Feb 14 01:09:00.010148 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 14 01:09:00.010184 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 01:09:00.010198 kernel: BIOS-provided physical RAM map:
Feb 14 01:09:00.010214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 14 01:09:00.010223 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 14 01:09:00.010233 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 14 01:09:00.010245 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 14 01:09:00.010255 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 14 01:09:00.010265 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 14 01:09:00.010275 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 14 01:09:00.010286 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 14 01:09:00.010296 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 14 01:09:00.010311 kernel: NX (Execute Disable) protection: active
Feb 14 01:09:00.010322 kernel: APIC: Static calls initialized
Feb 14 01:09:00.010334 kernel: SMBIOS 2.8 present.
Feb 14 01:09:00.010346 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 14 01:09:00.010357 kernel: Hypervisor detected: KVM
Feb 14 01:09:00.010373 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 14 01:09:00.010384 kernel: kvm-clock: using sched offset of 4313652535 cycles
Feb 14 01:09:00.010396 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 14 01:09:00.010407 kernel: tsc: Detected 2499.998 MHz processor
Feb 14 01:09:00.010419 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 14 01:09:00.010430 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 14 01:09:00.010441 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 14 01:09:00.010467 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 14 01:09:00.010479 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 14 01:09:00.010495 kernel: Using GB pages for direct mapping
Feb 14 01:09:00.010507 kernel: ACPI: Early table checksum verification disabled
Feb 14 01:09:00.010518 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 14 01:09:00.010530 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010541 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010552 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010564 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 14 01:09:00.010575 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010586 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010602 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010614 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 01:09:00.010625 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 14 01:09:00.010636 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 14 01:09:00.010648 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 14 01:09:00.010665 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 14 01:09:00.010689 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 14 01:09:00.010707 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 14 01:09:00.010719 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 14 01:09:00.010731 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 14 01:09:00.010743 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 14 01:09:00.010755 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 14 01:09:00.010766 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 14 01:09:00.010778 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 14 01:09:00.010794 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 14 01:09:00.010806 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 14 01:09:00.010817 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 14 01:09:00.010829 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 14 01:09:00.010840 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 14 01:09:00.010852 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 14 01:09:00.010864 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 14 01:09:00.010875 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 14 01:09:00.010887 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 14 01:09:00.010898 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 14 01:09:00.010915 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 14 01:09:00.010927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 14 01:09:00.010938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 14 01:09:00.010950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 14 01:09:00.010962 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 14 01:09:00.010974 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 14 01:09:00.010986 kernel: Zone ranges:
Feb 14 01:09:00.010998 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 14 01:09:00.011010 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 14 01:09:00.011026 kernel: Normal empty
Feb 14 01:09:00.011038 kernel: Movable zone start for each node
Feb 14 01:09:00.011050 kernel: Early memory node ranges
Feb 14 01:09:00.011062 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 14 01:09:00.011073 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 14 01:09:00.011085 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 14 01:09:00.011097 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 14 01:09:00.011108 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 14 01:09:00.011120 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 14 01:09:00.011132 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 14 01:09:00.011148 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 14 01:09:00.011160 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 14 01:09:00.011172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 14 01:09:00.011184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 14 01:09:00.011196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 14 01:09:00.011207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 14 01:09:00.011219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 14 01:09:00.011231 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 14 01:09:00.011242 kernel: TSC deadline timer available
Feb 14 01:09:00.011259 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 14 01:09:00.011271 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 14 01:09:00.011283 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 14 01:09:00.011295 kernel: Booting paravirtualized kernel on KVM
Feb 14 01:09:00.011306 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 14 01:09:00.011318 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 14 01:09:00.011330 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 14 01:09:00.011342 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 14 01:09:00.011353 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 14 01:09:00.011370 kernel: kvm-guest: PV spinlocks enabled
Feb 14 01:09:00.011382 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 14 01:09:00.011395 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 01:09:00.011408 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 14 01:09:00.011419 kernel: random: crng init done
Feb 14 01:09:00.011431 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 01:09:00.011443 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 14 01:09:00.011468 kernel: Fallback order for Node 0: 0
Feb 14 01:09:00.011486 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 14 01:09:00.011498 kernel: Policy zone: DMA32
Feb 14 01:09:00.011510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 14 01:09:00.011522 kernel: software IO TLB: area num 16.
Feb 14 01:09:00.011534 kernel: Memory: 1901520K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194836K reserved, 0K cma-reserved)
Feb 14 01:09:00.011546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 14 01:09:00.011558 kernel: Kernel/User page tables isolation: enabled
Feb 14 01:09:00.011570 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 14 01:09:00.011582 kernel: ftrace: allocated 149 pages with 4 groups
Feb 14 01:09:00.011599 kernel: Dynamic Preempt: voluntary
Feb 14 01:09:00.011611 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 14 01:09:00.011624 kernel: rcu: RCU event tracing is enabled.
Feb 14 01:09:00.011636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 14 01:09:00.011648 kernel: Trampoline variant of Tasks RCU enabled.
Feb 14 01:09:00.011681 kernel: Rude variant of Tasks RCU enabled.
Feb 14 01:09:00.011701 kernel: Tracing variant of Tasks RCU enabled.
Feb 14 01:09:00.011713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 14 01:09:00.011726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 14 01:09:00.011738 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 14 01:09:00.011750 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 14 01:09:00.011763 kernel: Console: colour VGA+ 80x25
Feb 14 01:09:00.011779 kernel: printk: console [tty0] enabled
Feb 14 01:09:00.011792 kernel: printk: console [ttyS0] enabled
Feb 14 01:09:00.011804 kernel: ACPI: Core revision 20230628
Feb 14 01:09:00.011817 kernel: APIC: Switch to symmetric I/O mode setup
Feb 14 01:09:00.011829 kernel: x2apic enabled
Feb 14 01:09:00.011846 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 14 01:09:00.011859 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 14 01:09:00.011872 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 14 01:09:00.011884 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 14 01:09:00.011897 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 14 01:09:00.011909 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 14 01:09:00.011921 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 14 01:09:00.011933 kernel: Spectre V2 : Mitigation: Retpolines
Feb 14 01:09:00.011946 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 14 01:09:00.011963 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 14 01:09:00.011976 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 14 01:09:00.011988 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 14 01:09:00.012000 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 14 01:09:00.012012 kernel: MDS: Mitigation: Clear CPU buffers
Feb 14 01:09:00.012025 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 14 01:09:00.012037 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 14 01:09:00.012049 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 14 01:09:00.012062 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 14 01:09:00.012074 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 14 01:09:00.012086 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 14 01:09:00.012103 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 14 01:09:00.012116 kernel: Freeing SMP alternatives memory: 32K
Feb 14 01:09:00.012128 kernel: pid_max: default: 32768 minimum: 301
Feb 14 01:09:00.012140 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 14 01:09:00.012153 kernel: landlock: Up and running.
Feb 14 01:09:00.012165 kernel: SELinux: Initializing.
Feb 14 01:09:00.012177 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 01:09:00.012189 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 01:09:00.012202 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 14 01:09:00.012214 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 01:09:00.012227 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 01:09:00.012245 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 01:09:00.012257 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 14 01:09:00.012270 kernel: signal: max sigframe size: 1776
Feb 14 01:09:00.012282 kernel: rcu: Hierarchical SRCU implementation.
Feb 14 01:09:00.012295 kernel: rcu: Max phase no-delay instances is 400.
Feb 14 01:09:00.012307 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 14 01:09:00.012320 kernel: smp: Bringing up secondary CPUs ...
Feb 14 01:09:00.012332 kernel: smpboot: x86: Booting SMP configuration:
Feb 14 01:09:00.012345 kernel: .... node #0, CPUs: #1
Feb 14 01:09:00.012362 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 14 01:09:00.012375 kernel: smp: Brought up 1 node, 2 CPUs
Feb 14 01:09:00.012387 kernel: smpboot: Max logical packages: 16
Feb 14 01:09:00.012399 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 14 01:09:00.012412 kernel: devtmpfs: initialized
Feb 14 01:09:00.012424 kernel: x86/mm: Memory block size: 128MB
Feb 14 01:09:00.012436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 14 01:09:00.012467 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 14 01:09:00.012481 kernel: pinctrl core: initialized pinctrl subsystem
Feb 14 01:09:00.012499 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 14 01:09:00.012512 kernel: audit: initializing netlink subsys (disabled)
Feb 14 01:09:00.012524 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 14 01:09:00.012537 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 14 01:09:00.012549 kernel: audit: type=2000 audit(1739495337.913:1): state=initialized audit_enabled=0 res=1
Feb 14 01:09:00.012561 kernel: cpuidle: using governor menu
Feb 14 01:09:00.012574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 14 01:09:00.012586 kernel: dca service started, version 1.12.1
Feb 14 01:09:00.012599 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 14 01:09:00.012616 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 14 01:09:00.012629 kernel: PCI: Using configuration type 1 for base access
Feb 14 01:09:00.012641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 14 01:09:00.012654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 14 01:09:00.012666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 14 01:09:00.012690 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 14 01:09:00.012703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 14 01:09:00.012715 kernel: ACPI: Added _OSI(Module Device)
Feb 14 01:09:00.012727 kernel: ACPI: Added _OSI(Processor Device)
Feb 14 01:09:00.012745 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 14 01:09:00.012758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 14 01:09:00.012770 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 14 01:09:00.012783 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 14 01:09:00.012795 kernel: ACPI: Interpreter enabled
Feb 14 01:09:00.012808 kernel: ACPI: PM: (supports S0 S5)
Feb 14 01:09:00.012820 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 14 01:09:00.012833 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 14 01:09:00.012845 kernel: PCI: Using E820 reservations for host bridge windows
Feb 14 01:09:00.012862 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 14 01:09:00.012874 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 14 01:09:00.013110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 01:09:00.013289 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 14 01:09:00.013481 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 14 01:09:00.013502 kernel: PCI host bridge to bus 0000:00
Feb 14 01:09:00.013688 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 14 01:09:00.013854 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 14 01:09:00.014017 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 14 01:09:00.014174 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 14 01:09:00.014322 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 14 01:09:00.014484 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 14 01:09:00.014643 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 14 01:09:00.014843 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 14 01:09:00.015090 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 14 01:09:00.015273 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 14 01:09:00.015441 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 14 01:09:00.015626 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 14 01:09:00.015809 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 14 01:09:00.015993 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.016171 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 14 01:09:00.016348 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.016533 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 14 01:09:00.016740 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.016951 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 14 01:09:00.017218 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.017412 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 14 01:09:00.017627 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.017813 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 14 01:09:00.017993 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.018162 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 14 01:09:00.018338 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.019380 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 14 01:09:00.019591 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 14 01:09:00.019777 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 14 01:09:00.019955 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 14 01:09:00.020127 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 14 01:09:00.020295 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 14 01:09:00.020479 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 14 01:09:00.020658 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 14 01:09:00.020850 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 14 01:09:00.021020 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 14 01:09:00.021188 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 14 01:09:00.021355 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 14 01:09:00.021790 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 14 01:09:00.023593 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 14 01:09:00.023805 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 14 01:09:00.023977 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 14 01:09:00.024143 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 14 01:09:00.024340 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 14 01:09:00.025606 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 14 01:09:00.025819 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 14 01:09:00.026004 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 14 01:09:00.026175 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 01:09:00.026340 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 01:09:00.028091 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 01:09:00.028276 kernel: pci_bus 0000:02: extended config space not accessible
Feb 14 01:09:00.028524 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 14 01:09:00.028730 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 14 01:09:00.028906 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 01:09:00.029076 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 01:09:00.029254 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 14 01:09:00.029427 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 14 01:09:00.030665 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 01:09:00.030854 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 01:09:00.031029 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 01:09:00.031210 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 14 01:09:00.031382 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 14 01:09:00.032611 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 01:09:00.032806 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 01:09:00.032975 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 01:09:00.033142 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 01:09:00.033306 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 01:09:00.033505 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 01:09:00.033685 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 01:09:00.033862 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 01:09:00.034030 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 01:09:00.034202 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 01:09:00.034369 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 01:09:00.035600 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 01:09:00.035787 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 01:09:00.035961 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 01:09:00.036124 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 01:09:00.036312 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 01:09:00.037515 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 01:09:00.037716 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 01:09:00.037738 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 14 01:09:00.037751 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 14 01:09:00.037764 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 14 01:09:00.037785 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 14 01:09:00.037798 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 14 01:09:00.037811 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 14 01:09:00.037823 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 14 01:09:00.037836 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 14 01:09:00.037849 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 14 01:09:00.037861 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 14 01:09:00.037874 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 14 01:09:00.037887 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 14 01:09:00.037904 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 14 01:09:00.037917 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 14 01:09:00.037930 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 14 01:09:00.037942 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 14 01:09:00.037955 kernel: iommu: Default domain type: Translated
Feb 14 01:09:00.037968 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 14 01:09:00.037980 kernel: PCI: Using ACPI for IRQ routing
Feb 14 01:09:00.037993 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 14 01:09:00.038006 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 14 01:09:00.038023 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 14 01:09:00.038187 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 14 01:09:00.038352 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 14 01:09:00.038537 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 14 01:09:00.038558 kernel: vgaarb: loaded
Feb 14 01:09:00.038571 kernel: clocksource: Switched to clocksource kvm-clock
Feb 14 01:09:00.038584 kernel: VFS: Disk quotas dquot_6.6.0
Feb 14 01:09:00.038597 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 14 01:09:00.038609 kernel: pnp: PnP ACPI init
Feb 14 01:09:00.038799 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 14 01:09:00.038822 kernel: pnp: PnP ACPI: found 5 devices
Feb 14 01:09:00.038835 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 14 01:09:00.038848 kernel: NET: Registered PF_INET protocol family
Feb 14 01:09:00.038861 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 14 01:09:00.038874 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 14 01:09:00.038887 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 14 01:09:00.038900 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 14 01:09:00.038920 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 14 01:09:00.038933 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 14 01:09:00.038946 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 01:09:00.038959 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 01:09:00.038972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 14 01:09:00.038984 kernel: NET: Registered PF_XDP protocol family
Feb 14 01:09:00.039145 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 14 01:09:00.039311 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 01:09:00.039498 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 01:09:00.039666 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 14 01:09:00.039847 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 14 01:09:00.040011 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 14 01:09:00.040175 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 14 01:09:00.040340 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 14 01:09:00.041509 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 14 01:09:00.041698 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 14 01:09:00.041869 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 14 01:09:00.042036 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 14 01:09:00.042205 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 14 01:09:00.042372 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 14 01:09:00.043611 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 14 01:09:00.043803 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 14 01:09:00.044010 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 01:09:00.044191 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 01:09:00.044359 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 01:09:00.045581 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 14 01:09:00.045774 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 01:09:00.045942 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 01:09:00.046109 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 01:09:00.046274 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 14 01:09:00.046471 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 01:09:00.046644 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 01:09:00.046825 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 01:09:00.046994 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 14 01:09:00.047158 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 01:09:00.047333 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 01:09:00.047708 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 01:09:00.047878 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 14 01:09:00.048043 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 01:09:00.048208 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 01:09:00.048372 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 01:09:00.048624 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 14 01:09:00.048812 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 01:09:00.048982 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 01:09:00.049150 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 01:09:00.049326 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 14 01:09:00.049535 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 01:09:00.049716 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 01:09:00.049880 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 01:09:00.050042 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 14 01:09:00.050214 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 01:09:00.050378 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 01:09:00.050575 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 01:09:00.050762 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 14 01:09:00.050932 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 01:09:00.051104 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 01:09:00.051263 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 14 01:09:00.051417 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 14 01:09:00.051612 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 14 01:09:00.051784 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 14 01:09:00.051934 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 14 01:09:00.053039 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 14 01:09:00.053215 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 14 01:09:00.053372 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 14 01:09:00.054578 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 01:09:00.054773 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 14 01:09:00.054952 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 14 01:09:00.055111 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 14 01:09:00.055269 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 01:09:00.055439 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 14 01:09:00.056639 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 14 01:09:00.056812 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 01:09:00.056996 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 14 01:09:00.057157 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 14 01:09:00.057313 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 01:09:00.057509 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 14 01:09:00.057677 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 14 01:09:00.057836 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 01:09:00.058000 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 14 01:09:00.058165 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 14 01:09:00.058320 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 01:09:00.060520 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 14 01:09:00.060699 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 14 01:09:00.060860 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 01:09:00.061031 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 14 01:09:00.061189 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 14 01:09:00.061360 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 01:09:00.061382 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 14 01:09:00.061396 kernel: PCI: CLS 0 bytes, default 64
Feb 14 01:09:00.061410 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 14 01:09:00.061423 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 14 01:09:00.061437 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 14 01:09:00.063484 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 14 01:09:00.063500 kernel: Initialise system trusted keyrings
Feb 14 01:09:00.063521 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 14 01:09:00.063535 kernel: Key type asymmetric registered
Feb 14 01:09:00.063548 kernel: Asymmetric key parser 'x509' registered
Feb 14 01:09:00.063562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 14 01:09:00.063575 kernel: io scheduler mq-deadline registered
Feb 14 01:09:00.063589 kernel: io scheduler kyber registered
Feb 14 01:09:00.063602 kernel: io scheduler bfq registered
Feb 14 01:09:00.063791 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 14 01:09:00.063963 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 14 01:09:00.064139 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.064308 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 14 01:09:00.066540 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Feb 14 01:09:00.066738 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.066914 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 14 01:09:00.067083 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 14 01:09:00.067259 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.067427 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 14 01:09:00.067619 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 14 01:09:00.067800 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.067969 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 14 01:09:00.068134 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 14 01:09:00.068340 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.070564 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 14 01:09:00.070747 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 14 01:09:00.070914 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.071080 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 14 01:09:00.071245 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 14 01:09:00.071418 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.071614 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 14 01:09:00.071794 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 14 01:09:00.071962 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 01:09:00.071985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 14 01:09:00.071999 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 14 01:09:00.072021 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 14 01:09:00.072035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 14 01:09:00.072049 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 14 01:09:00.072062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 14 01:09:00.072076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 14 01:09:00.072089 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 14 01:09:00.072258 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 14 01:09:00.072281 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 14 01:09:00.072432 kernel: rtc_cmos 00:03: registered as rtc0
Feb 14 01:09:00.072631 kernel: rtc_cmos 00:03: setting system clock to 2025-02-14T01:08:59 UTC (1739495339)
Feb 14 01:09:00.072799 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 14 01:09:00.072820 kernel: intel_pstate: CPU model not supported
Feb 14 01:09:00.072834 kernel: NET: Registered PF_INET6 protocol family
Feb 14 01:09:00.072847 kernel: Segment Routing with IPv6
Feb 14 01:09:00.072860 kernel: In-situ OAM (IOAM) with IPv6
Feb 14 01:09:00.072874 kernel: NET: Registered PF_PACKET protocol family
Feb 14 01:09:00.072887 kernel: Key type dns_resolver registered
Feb 14 01:09:00.072908 kernel: IPI shorthand broadcast: enabled
Feb 14 01:09:00.072927 kernel: sched_clock: Marking stable (1261003876, 240734151)->(1628508043, -126770016)
Feb 14 01:09:00.072940 kernel: registered taskstats version 1
Feb 14 01:09:00.072954 kernel: Loading compiled-in X.509 certificates
Feb 14 01:09:00.072968 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 14 01:09:00.072980 kernel: Key type .fscrypt registered
Feb 14 01:09:00.072993 kernel: Key type fscrypt-provisioning registered
Feb 14 01:09:00.073007 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 14 01:09:00.073020 kernel: ima: Allocated hash algorithm: sha1
Feb 14 01:09:00.073038 kernel: ima: No architecture policies found
Feb 14 01:09:00.073052 kernel: clk: Disabling unused clocks
Feb 14 01:09:00.073065 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 14 01:09:00.073078 kernel: Write protecting the kernel read-only data: 36864k
Feb 14 01:09:00.073091 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 14 01:09:00.073105 kernel: Run /init as init process
Feb 14 01:09:00.073118 kernel: with arguments:
Feb 14 01:09:00.073131 kernel: /init
Feb 14 01:09:00.073143 kernel: with environment:
Feb 14 01:09:00.073161 kernel: HOME=/
Feb 14 01:09:00.073174 kernel: TERM=linux
Feb 14 01:09:00.073188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 14 01:09:00.073204 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 01:09:00.073221 systemd[1]: Detected virtualization kvm.
Feb 14 01:09:00.073235 systemd[1]: Detected architecture x86-64.
Feb 14 01:09:00.073248 systemd[1]: Running in initrd.
Feb 14 01:09:00.073262 systemd[1]: No hostname configured, using default hostname.
Feb 14 01:09:00.073281 systemd[1]: Hostname set to .
Feb 14 01:09:00.073295 systemd[1]: Initializing machine ID from VM UUID.
Feb 14 01:09:00.073309 systemd[1]: Queued start job for default target initrd.target.
Feb 14 01:09:00.073323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 01:09:00.073337 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 01:09:00.073352 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 14 01:09:00.073366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 01:09:00.073380 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 14 01:09:00.073400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 14 01:09:00.073416 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 14 01:09:00.073431 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 14 01:09:00.073498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:09:00.073519 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 14 01:09:00.073533 systemd[1]: Reached target paths.target - Path Units.
Feb 14 01:09:00.073555 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 01:09:00.073569 systemd[1]: Reached target swap.target - Swaps.
Feb 14 01:09:00.073583 systemd[1]: Reached target timers.target - Timer Units.
Feb 14 01:09:00.073598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 01:09:00.073612 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 01:09:00.073626 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 14 01:09:00.073640 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 14 01:09:00.073655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 01:09:00.073679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 01:09:00.073702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 01:09:00.073717 systemd[1]: Reached target sockets.target - Socket Units.
Feb 14 01:09:00.073731 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 14 01:09:00.073745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 01:09:00.073759 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 14 01:09:00.073773 systemd[1]: Starting systemd-fsck-usr.service...
Feb 14 01:09:00.073787 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 01:09:00.073801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 01:09:00.073815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:09:00.073835 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 14 01:09:00.073887 systemd-journald[200]: Collecting audit messages is disabled.
Feb 14 01:09:00.073920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 01:09:00.073941 systemd[1]: Finished systemd-fsck-usr.service.
Feb 14 01:09:00.073957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 14 01:09:00.073971 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 14 01:09:00.073985 kernel: Bridge firewalling registered
Feb 14 01:09:00.073999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 01:09:00.074019 systemd-journald[200]: Journal started
Feb 14 01:09:00.074046 systemd-journald[200]: Runtime Journal (/run/log/journal/60df5a9cf4774c429f8900e0df4dd81f) is 4.7M, max 38.0M, 33.2M free.
Feb 14 01:09:00.017190 systemd-modules-load[201]: Inserted module 'overlay'
Feb 14 01:09:00.138902 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 01:09:00.068339 systemd-modules-load[201]: Inserted module 'br_netfilter'
Feb 14 01:09:00.139913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:09:00.141345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 14 01:09:00.151822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:09:00.154050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 01:09:00.164767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 01:09:00.172658 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 01:09:00.188822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 01:09:00.191958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 01:09:00.194398 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:09:00.202637 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 14 01:09:00.203819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 01:09:00.210659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 14 01:09:00.226860 dracut-cmdline[234]: dracut-dracut-053
Feb 14 01:09:00.233036 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 01:09:00.253886 systemd-resolved[236]: Positive Trust Anchors:
Feb 14 01:09:00.253909 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 14 01:09:00.253954 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 14 01:09:00.263923 systemd-resolved[236]: Defaulting to hostname 'linux'.
Feb 14 01:09:00.265534 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 14 01:09:00.267439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 14 01:09:00.343495 kernel: SCSI subsystem initialized
Feb 14 01:09:00.354512 kernel: Loading iSCSI transport class v2.0-870.
Feb 14 01:09:00.367533 kernel: iscsi: registered transport (tcp)
Feb 14 01:09:00.393699 kernel: iscsi: registered transport (qla4xxx)
Feb 14 01:09:00.393781 kernel: QLogic iSCSI HBA Driver
Feb 14 01:09:00.450276 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 14 01:09:00.456699 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 14 01:09:00.490799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 14 01:09:00.490888 kernel: device-mapper: uevent: version 1.0.3
Feb 14 01:09:00.493292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 14 01:09:00.568547 kernel: raid6: sse2x4 gen() 13886 MB/s
Feb 14 01:09:00.568644 kernel: raid6: sse2x2 gen() 9597 MB/s
Feb 14 01:09:00.577098 kernel: raid6: sse2x1 gen() 10182 MB/s
Feb 14 01:09:00.577142 kernel: raid6: using algorithm sse2x4 gen() 13886 MB/s
Feb 14 01:09:00.596105 kernel: raid6: .... xor() 7804 MB/s, rmw enabled
Feb 14 01:09:00.596155 kernel: raid6: using ssse3x2 recovery algorithm
Feb 14 01:09:00.622484 kernel: xor: automatically using best checksumming function avx
Feb 14 01:09:00.819495 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 14 01:09:00.833502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 01:09:00.840640 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 01:09:00.866955 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Feb 14 01:09:00.873879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 01:09:00.882652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 14 01:09:00.906810 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Feb 14 01:09:00.946824 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 01:09:00.955728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 01:09:01.061545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:09:01.070842 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 14 01:09:01.092133 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 14 01:09:01.099847 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 01:09:01.101985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:09:01.102758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 01:09:01.111830 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 14 01:09:01.140012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 01:09:01.177472 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 14 01:09:01.245916 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 14 01:09:01.246132 kernel: cryptd: max_cpu_qlen set to 1000
Feb 14 01:09:01.246155 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 14 01:09:01.246173 kernel: GPT:17805311 != 125829119
Feb 14 01:09:01.246190 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 14 01:09:01.246207 kernel: GPT:17805311 != 125829119
Feb 14 01:09:01.246224 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 14 01:09:01.246241 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 01:09:01.246265 kernel: AVX version of gcm_enc/dec engaged.
Feb 14 01:09:01.234209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 01:09:01.250923 kernel: AES CTR mode by8 optimization enabled
Feb 14 01:09:01.234459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:09:01.235473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:09:01.236241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:09:01.236421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:09:01.237201 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:09:01.247804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:09:01.262956 kernel: ACPI: bus type USB registered
Feb 14 01:09:01.262994 kernel: usbcore: registered new interface driver usbfs
Feb 14 01:09:01.264469 kernel: usbcore: registered new interface driver hub
Feb 14 01:09:01.265914 kernel: usbcore: registered new device driver usb
Feb 14 01:09:01.299482 kernel: libata version 3.00 loaded.
Feb 14 01:09:01.315731 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 14 01:09:01.324366 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 14 01:09:01.324636 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 14 01:09:01.324915 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 14 01:09:01.325118 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 14 01:09:01.325331 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 14 01:09:01.325606 kernel: hub 1-0:1.0: USB hub found
Feb 14 01:09:01.325854 kernel: hub 1-0:1.0: 4 ports detected
Feb 14 01:09:01.326048 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 14 01:09:01.326263 kernel: hub 2-0:1.0: USB hub found
Feb 14 01:09:01.329261 kernel: hub 2-0:1.0: 4 ports detected
Feb 14 01:09:01.329507 kernel: ahci 0000:00:1f.2: version 3.0
Feb 14 01:09:01.359154 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 14 01:09:01.359190 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 14 01:09:01.359406 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 14 01:09:01.359672 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (466)
Feb 14 01:09:01.359695 kernel: scsi host0: ahci
Feb 14 01:09:01.359903 kernel: scsi host1: ahci
Feb 14 01:09:01.360221 kernel: scsi host2: ahci
Feb 14 01:09:01.360428 kernel: scsi host3: ahci
Feb 14 01:09:01.360637 kernel: scsi host4: ahci
Feb 14 01:09:01.360841 kernel: scsi host5: ahci
Feb 14 01:09:01.361045 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Feb 14 01:09:01.361067 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Feb 14 01:09:01.361084 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Feb 14 01:09:01.361101 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Feb 14 01:09:01.361126 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Feb 14 01:09:01.361144 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Feb 14 01:09:01.384708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 14 01:09:01.443517 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (482)
Feb 14 01:09:01.449782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:09:01.457527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 14 01:09:01.463810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 14 01:09:01.464710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 14 01:09:01.478039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 14 01:09:01.484742 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 14 01:09:01.488628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 01:09:01.496525 disk-uuid[563]: Primary Header is updated. Feb 14 01:09:01.496525 disk-uuid[563]: Secondary Entries is updated. Feb 14 01:09:01.496525 disk-uuid[563]: Secondary Header is updated. Feb 14 01:09:01.504475 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:09:01.510800 kernel: GPT:disk_guids don't match. Feb 14 01:09:01.510856 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 14 01:09:01.513100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:09:01.522798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:09:01.527528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:09:01.560722 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 14 01:09:01.664722 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.664803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.667719 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.667769 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.670293 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.674520 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 14 01:09:01.711479 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 01:09:01.719064 kernel: usbcore: registered new interface driver usbhid Feb 14 01:09:01.719103 kernel: usbhid: USB HID core driver Feb 14 01:09:01.727483 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 14 01:09:01.736506 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 14 01:09:02.522158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:09:02.522682 disk-uuid[564]: The operation has completed successfully. Feb 14 01:09:02.572475 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 14 01:09:02.572687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 14 01:09:02.597748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 14 01:09:02.605023 sh[588]: Success Feb 14 01:09:02.621469 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 14 01:09:02.684686 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 14 01:09:02.695600 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 14 01:09:02.697604 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
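
The GPT warnings in the lines above ("GPT:17805311 != 125829119") are the expected first-boot state: the image ships a GPT sized for the original ~8.5 GiB artifact, while this VM disk has 125829120 512-byte sectors, so the alternate header is not yet at the last LBA. disk-uuid then rewrites both headers, which is what the "Primary Header is updated" / "Secondary Header is updated" messages record. A minimal sketch of reading the two fields the kernel is comparing, assuming a raw disk image at a hypothetical path and 512-byte sectors:

    import struct

    IMAGE = "disk.img"   # hypothetical path to a raw disk image
    SECTOR = 512

    with open(IMAGE, "rb") as f:
        f.seek(1 * SECTOR)        # the primary GPT header lives at LBA 1
        hdr = f.read(92)

    sig, revision, hdr_size = struct.unpack_from("<8sII", hdr, 0)
    current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)
    first_usable, last_usable = struct.unpack_from("<QQ", hdr, 40)

    assert sig == b"EFI PART", "not a GPT disk"
    print(f"primary header at LBA {current_lba}, alternate expected at LBA {backup_lba}")
    print(f"usable LBAs: {first_usable}..{last_usable}")
    # On this boot the kernel saw backup_lba == 17805311 while the disk's last
    # LBA is 125829119, hence "Alternate GPT header not at the end of the disk".
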
Feb 14 01:09:02.721779 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 14 01:09:02.721851 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:09:02.721878 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 14 01:09:02.724718 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 14 01:09:02.727614 kernel: BTRFS info (device dm-0): using free space tree Feb 14 01:09:02.736882 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 14 01:09:02.738364 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 14 01:09:02.744740 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 14 01:09:02.751772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 14 01:09:02.772759 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:09:02.772832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:09:02.772854 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:09:02.779474 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:09:02.792094 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 14 01:09:02.795943 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:09:02.802168 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 14 01:09:02.810869 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 14 01:09:02.914896 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 01:09:02.924669 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 01:09:02.963081 ignition[686]: Ignition 2.19.0 Feb 14 01:09:02.963105 ignition[686]: Stage: fetch-offline Feb 14 01:09:02.963185 ignition[686]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:02.967036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 01:09:02.963205 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:02.963378 ignition[686]: parsed url from cmdline: "" Feb 14 01:09:02.963385 ignition[686]: no config URL provided Feb 14 01:09:02.963394 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 01:09:02.963410 ignition[686]: no config at "/usr/lib/ignition/user.ign" Feb 14 01:09:02.963419 ignition[686]: failed to fetch config: resource requires networking Feb 14 01:09:02.963700 ignition[686]: Ignition finished successfully Feb 14 01:09:02.974493 systemd-networkd[771]: lo: Link UP Feb 14 01:09:02.974499 systemd-networkd[771]: lo: Gained carrier Feb 14 01:09:02.976715 systemd-networkd[771]: Enumeration completed Feb 14 01:09:02.977207 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:09:02.977213 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 01:09:02.978314 systemd[1]: Started systemd-networkd.service - Network Configuration. 
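
At this point verity-setup has validated /dev/mapper/usr against the verity.usrhash root hash passed on the kernel command line, with the kernel choosing its "sha256-avx" implementation for the hashing. dm-verity hashes the data device block by block into a Merkle tree whose root must equal that hash. A toy sketch of building the bottom two tree levels, assuming 4096-byte blocks and ignoring verity's on-disk superblock and per-block salt:

    import hashlib

    BLOCK = 4096

    def hash_level(blocks):
        # Hash each block; repeating this over packed digests until one
        # digest remains yields the verity root hash.
        return [hashlib.sha256(b).digest() for b in blocks]

    data = [bytes([i]) * BLOCK for i in range(8)]   # stand-in for /usr contents
    level0 = hash_level(data)

    # Pack the digests into 4096-byte hash blocks and hash again.
    packed = b"".join(level0)
    packed += b"\x00" * (-len(packed) % BLOCK)
    level1 = hash_level([packed[i:i + BLOCK] for i in range(0, len(packed), BLOCK)])
    print("toy root hash:", level1[0].hex())

Because the root hash arrives on the (verified) kernel command line, any offline tampering with /usr changes some leaf hash and the mismatch propagates up to the root, failing the mount.
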
Feb 14 01:09:02.979390 systemd-networkd[771]: eth0: Link UP Feb 14 01:09:02.979396 systemd-networkd[771]: eth0: Gained carrier Feb 14 01:09:02.979407 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:09:02.980750 systemd[1]: Reached target network.target - Network. Feb 14 01:09:02.991698 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 14 01:09:03.008565 systemd-networkd[771]: eth0: DHCPv4 address 10.230.17.130/30, gateway 10.230.17.129 acquired from 10.230.17.129 Feb 14 01:09:03.016441 ignition[778]: Ignition 2.19.0 Feb 14 01:09:03.016487 ignition[778]: Stage: fetch Feb 14 01:09:03.016738 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:03.016758 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:03.016877 ignition[778]: parsed url from cmdline: "" Feb 14 01:09:03.016884 ignition[778]: no config URL provided Feb 14 01:09:03.016893 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 01:09:03.016908 ignition[778]: no config at "/usr/lib/ignition/user.ign" Feb 14 01:09:03.017123 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 14 01:09:03.017180 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 14 01:09:03.017231 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 14 01:09:03.033803 ignition[778]: GET result: OK Feb 14 01:09:03.034320 ignition[778]: parsing config with SHA512: 2f34eb685c1898c9b769bfef52c35c5ce49814f7aefdca71870837d4f15c49998e1bfd48a46637d024757336705949f40d17a9da2ca7031006332e184cc0bf18 Feb 14 01:09:03.040268 unknown[778]: fetched base config from "system" Feb 14 01:09:03.040287 unknown[778]: fetched base config from "system" Feb 14 01:09:03.040766 ignition[778]: fetch: fetch complete Feb 14 01:09:03.040295 unknown[778]: fetched user config from "openstack" Feb 14 01:09:03.040775 ignition[778]: fetch: fetch passed Feb 14 01:09:03.043493 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 14 01:09:03.040838 ignition[778]: Ignition finished successfully Feb 14 01:09:03.059787 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 14 01:09:03.079697 ignition[785]: Ignition 2.19.0 Feb 14 01:09:03.080728 ignition[785]: Stage: kargs Feb 14 01:09:03.080965 ignition[785]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:03.080985 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:03.084054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 14 01:09:03.082104 ignition[785]: kargs: kargs passed Feb 14 01:09:03.082188 ignition[785]: Ignition finished successfully Feb 14 01:09:03.090668 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 14 01:09:03.109070 ignition[791]: Ignition 2.19.0 Feb 14 01:09:03.109095 ignition[791]: Stage: disks Feb 14 01:09:03.109395 ignition[791]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:03.109416 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:03.113053 ignition[791]: disks: disks passed Feb 14 01:09:03.113126 ignition[791]: Ignition finished successfully Feb 14 01:09:03.116583 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 14 01:09:03.118063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
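
The fetch stage above finds no config drive and no cmdline URL, so it GETs the user config from the OpenStack metadata service and, once that succeeds, records the config's SHA512 before parsing. A minimal sketch of that fetch-with-retries step; the endpoint is the one from the log, while the retry count and backoff are assumptions (real Ignition retries on its own schedule):

    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_userdata(retries=5, backoff=2.0):
        for attempt in range(1, retries + 1):
            print(f"GET {URL}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError):
                time.sleep(backoff * attempt)   # simple linear backoff
        raise RuntimeError("failed to fetch config: resource requires networking")

    # data = fetch_userdata()
    # print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
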
Feb 14 01:09:03.118863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 14 01:09:03.120535 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 01:09:03.122250 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 01:09:03.124049 systemd[1]: Reached target basic.target - Basic System. Feb 14 01:09:03.135713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 14 01:09:03.153572 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 14 01:09:03.157066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 14 01:09:03.164583 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 14 01:09:03.287499 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 14 01:09:03.288516 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 14 01:09:03.289873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 14 01:09:03.297557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 01:09:03.300576 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 14 01:09:03.302339 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 14 01:09:03.305712 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 14 01:09:03.306644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 14 01:09:03.306737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 01:09:03.316468 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Feb 14 01:09:03.319810 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 14 01:09:03.325356 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:09:03.325384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:09:03.325413 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:09:03.330660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 14 01:09:03.338535 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:09:03.340845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 14 01:09:03.405872 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Feb 14 01:09:03.413536 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Feb 14 01:09:03.419435 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Feb 14 01:09:03.425880 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Feb 14 01:09:03.529379 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 14 01:09:03.535579 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 14 01:09:03.539668 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 14 01:09:03.551508 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:09:03.578068 ignition[924]: INFO : Ignition 2.19.0 Feb 14 01:09:03.578068 ignition[924]: INFO : Stage: mount Feb 14 01:09:03.579833 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:03.579833 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:03.583737 ignition[924]: INFO : mount: mount passed Feb 14 01:09:03.583737 ignition[924]: INFO : Ignition finished successfully Feb 14 01:09:03.584558 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 14 01:09:03.586215 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 14 01:09:03.718385 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 14 01:09:04.697201 systemd-networkd[771]: eth0: Gained IPv6LL Feb 14 01:09:06.206932 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8460:24:19ff:fee6:1182/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8460:24:19ff:fee6:1182/64 assigned by NDisc. Feb 14 01:09:06.206952 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 14 01:09:10.478598 coreos-metadata[809]: Feb 14 01:09:10.478 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:09:10.503198 coreos-metadata[809]: Feb 14 01:09:10.503 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 01:09:10.520983 coreos-metadata[809]: Feb 14 01:09:10.520 INFO Fetch successful Feb 14 01:09:10.521865 coreos-metadata[809]: Feb 14 01:09:10.521 INFO wrote hostname srv-krhnz.gb1.brightbox.com to /sysroot/etc/hostname Feb 14 01:09:10.523876 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 14 01:09:10.524041 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 14 01:09:10.542747 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 14 01:09:10.553080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 01:09:10.575504 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Feb 14 01:09:10.579037 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:09:10.579082 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:09:10.580854 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:09:10.586487 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:09:10.589764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
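
Note the timestamps above: coreos-metadata waits several seconds for a config drive before warning "failed to locate config-drive, using the metadata service API instead" and fetching the hostname over HTTP, which it then writes to /sysroot/etc/hostname. A condensed sketch of that fallback order; the device labels and URL come from the log, while the mounting and parsing of an actual config drive are elided:

    import os
    import urllib.request

    CONFIG_DRIVE_LABELS = (
        "/dev/disk/by-label/config-2",
        "/dev/disk/by-label/CONFIG-2",
    )
    METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def resolve_hostname():
        for dev in CONFIG_DRIVE_LABELS:
            if os.path.exists(dev):
                # Real code would mount the drive and read meta_data.json here.
                return f"(would read hostname from {dev})"
        # No config drive found: fall back to the metadata service API.
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    # with open("/sysroot/etc/hostname", "w") as f:
    #     f.write(resolve_hostname() + "\n")
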
Feb 14 01:09:10.620163 ignition[958]: INFO : Ignition 2.19.0 Feb 14 01:09:10.621346 ignition[958]: INFO : Stage: files Feb 14 01:09:10.622319 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:10.624515 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:10.624515 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Feb 14 01:09:10.627769 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 14 01:09:10.628938 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 14 01:09:10.633993 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 14 01:09:10.635442 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 14 01:09:10.637038 unknown[958]: wrote ssh authorized keys file for user: core Feb 14 01:09:10.638260 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 14 01:09:10.640698 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 14 01:09:10.642138 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 14 01:09:10.642138 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 14 01:09:10.642138 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 14 01:09:10.839630 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 14 01:09:11.206985 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 14 01:09:11.206985 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 01:09:11.215551 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 14 01:09:11.215551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 14 01:09:11.823899 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 14 01:09:17.533601 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 14 01:09:17.533601 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 14 01:09:17.538406 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 14 01:09:17.559927 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 14 01:09:17.559927 ignition[958]: INFO : files: files passed Feb 14 01:09:17.559927 ignition[958]: INFO : Ignition finished successfully Feb 14 01:09:17.550721 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 14 01:09:17.565910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 14 01:09:17.571953 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 14 01:09:17.578903 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 14 01:09:17.579149 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
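
Beyond plain files and links, the files stage above writes a systemd drop-in (10-use-cgroupfs.conf under containerd.service.d) and flips prepare-helm.service to enabled via preset, all relative to the /sysroot prefix. A small sketch of the drop-in write; the directory and file name come from the log, but the unit-file body below is purely hypothetical, since the log never records file contents:

    import os

    SYSROOT = "/sysroot"   # Ignition writes into the mounted root, not /
    DROPIN_DIR = os.path.join(SYSROOT, "etc/systemd/system/containerd.service.d")
    DROPIN = os.path.join(DROPIN_DIR, "10-use-cgroupfs.conf")

    # Hypothetical contents; only the path is attested by the log.
    BODY = """\
    [Service]
    ExecStart=
    ExecStart=/usr/bin/containerd
    """

    os.makedirs(DROPIN_DIR, exist_ok=True)
    with open(DROPIN, "w") as f:
        f.write(BODY)
    print(f"wrote systemd drop-in {DROPIN}")

Drop-ins let Ignition adjust a vendor unit without replacing it: systemd merges every *.conf in the .d directory over the original unit file at load time.
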
Feb 14 01:09:17.600223 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:09:17.600223 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:09:17.602952 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:09:17.604793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 01:09:17.606354 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 14 01:09:17.613788 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 14 01:09:17.665508 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 14 01:09:17.665699 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 14 01:09:17.667963 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 14 01:09:17.670184 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 14 01:09:17.671683 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 14 01:09:17.683710 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 14 01:09:17.702142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 01:09:17.706664 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 14 01:09:17.727736 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:09:17.729799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 01:09:17.731783 systemd[1]: Stopped target timers.target - Timer Units. Feb 14 01:09:17.732660 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 14 01:09:17.732858 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 01:09:17.734966 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 14 01:09:17.736036 systemd[1]: Stopped target basic.target - Basic System. Feb 14 01:09:17.737693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 14 01:09:17.739417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 01:09:17.740897 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 14 01:09:17.742561 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 14 01:09:17.744207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 01:09:17.745988 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 14 01:09:17.747611 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 14 01:09:17.749352 systemd[1]: Stopped target swap.target - Swaps. Feb 14 01:09:17.750762 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 14 01:09:17.750983 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 14 01:09:17.753064 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:09:17.754131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 01:09:17.755612 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 14 01:09:17.755980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 14 01:09:17.757171 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 14 01:09:17.757377 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 14 01:09:17.759592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 14 01:09:17.759767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 01:09:17.761490 systemd[1]: ignition-files.service: Deactivated successfully. Feb 14 01:09:17.761692 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 14 01:09:17.774267 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 14 01:09:17.781015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 01:09:17.781410 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:09:17.786505 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 14 01:09:17.787262 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 14 01:09:17.788585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 01:09:17.791622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 14 01:09:17.791886 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 01:09:17.803861 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 14 01:09:17.804049 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 14 01:09:17.812420 ignition[1012]: INFO : Ignition 2.19.0 Feb 14 01:09:17.812420 ignition[1012]: INFO : Stage: umount Feb 14 01:09:17.814232 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:09:17.814232 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:09:17.816174 ignition[1012]: INFO : umount: umount passed Feb 14 01:09:17.816174 ignition[1012]: INFO : Ignition finished successfully Feb 14 01:09:17.817929 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 14 01:09:17.818132 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 14 01:09:17.821831 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 14 01:09:17.821947 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 14 01:09:17.825743 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 14 01:09:17.825821 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 14 01:09:17.827932 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 14 01:09:17.828011 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 14 01:09:17.829415 systemd[1]: Stopped target network.target - Network. Feb 14 01:09:17.832537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 14 01:09:17.832630 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 01:09:17.834106 systemd[1]: Stopped target paths.target - Path Units. Feb 14 01:09:17.838868 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 14 01:09:17.842584 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:09:17.844416 systemd[1]: Stopped target slices.target - Slice Units. Feb 14 01:09:17.845121 systemd[1]: Stopped target sockets.target - Socket Units. Feb 14 01:09:17.846637 systemd[1]: iscsid.socket: Deactivated successfully. Feb 14 01:09:17.846714 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 14 01:09:17.848052 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 14 01:09:17.848113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 01:09:17.849466 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 14 01:09:17.849549 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 14 01:09:17.850890 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 14 01:09:17.850968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 14 01:09:17.852615 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 14 01:09:17.855031 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 14 01:09:17.857787 systemd-networkd[771]: eth0: DHCPv6 lease lost Feb 14 01:09:17.859893 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 14 01:09:17.860884 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 14 01:09:17.861027 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 14 01:09:17.862646 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 14 01:09:17.862828 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 14 01:09:17.866284 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 14 01:09:17.866538 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 14 01:09:17.872687 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 14 01:09:17.873018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:09:17.874331 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 01:09:17.874421 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 01:09:17.881586 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 14 01:09:17.882839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 14 01:09:17.882918 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 01:09:17.883820 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 01:09:17.883899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:09:17.886641 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 14 01:09:17.886715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 14 01:09:17.888324 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 14 01:09:17.888407 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:09:17.890091 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:09:17.914534 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 14 01:09:17.914786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:09:17.916895 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 14 01:09:17.917057 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 14 01:09:17.919894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 14 01:09:17.920016 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 14 01:09:17.921435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 01:09:17.921529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 14 01:09:17.922991 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 01:09:17.923069 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 01:09:17.925413 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 01:09:17.925505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 01:09:17.927004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 01:09:17.927091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:09:17.939789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 01:09:17.942513 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 14 01:09:17.942620 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:09:17.945599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 01:09:17.945676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:09:17.953914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 01:09:17.954113 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 01:09:17.956259 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 01:09:17.972701 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 01:09:17.983410 systemd[1]: Switching root. Feb 14 01:09:18.022165 systemd-journald[200]: Journal stopped Feb 14 01:09:19.639997 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Feb 14 01:09:19.640242 kernel: SELinux: policy capability network_peer_controls=1 Feb 14 01:09:19.640288 kernel: SELinux: policy capability open_perms=1 Feb 14 01:09:19.640322 kernel: SELinux: policy capability extended_socket_class=1 Feb 14 01:09:19.640352 kernel: SELinux: policy capability always_check_network=0 Feb 14 01:09:19.640383 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 14 01:09:19.640408 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 14 01:09:19.640510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 14 01:09:19.640543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 14 01:09:19.640570 kernel: audit: type=1403 audit(1739495358.443:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 14 01:09:19.640649 systemd[1]: Successfully loaded SELinux policy in 56.406ms. Feb 14 01:09:19.640723 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.304ms. Feb 14 01:09:19.640754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 01:09:19.646519 systemd[1]: Detected virtualization kvm. Feb 14 01:09:19.646591 systemd[1]: Detected architecture x86-64. Feb 14 01:09:19.646638 systemd[1]: Detected first boot. Feb 14 01:09:19.646670 systemd[1]: Hostname set to . Feb 14 01:09:19.646706 systemd[1]: Initializing machine ID from VM UUID. Feb 14 01:09:19.646734 zram_generator::config[1073]: No configuration found. Feb 14 01:09:19.646775 systemd[1]: Populated /etc with preset unit settings. Feb 14 01:09:19.646797 systemd[1]: Queued start job for default target multi-user.target. 
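
After the switch to the real root, systemd detects KVM, loads the SELinux policy, and reports "Initializing machine ID from VM UUID": on a first boot the machine ID is derived from the hypervisor-provided UUID rather than generated randomly, so it stays stable across reprovisioning. A sketch of that derivation, assuming the UUID is exposed at the usual DMI path on this KVM guest; real systemd also handles SMBIOS byte-order quirks that this toy version skips:

    import pathlib

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        raw = pathlib.Path(path).read_text().strip()
        # machine-id format: the UUID lowercased with dashes removed,
        # i.e. 32 hex characters, as stored in /etc/machine-id.
        return raw.replace("-", "").lower()

    # print(machine_id_from_vm_uuid())
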
Feb 14 01:09:19.646836 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 14 01:09:19.646878 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 14 01:09:19.646916 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 14 01:09:19.646942 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 14 01:09:19.646962 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 14 01:09:19.646988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 14 01:09:19.647021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 14 01:09:19.647053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 14 01:09:19.647075 systemd[1]: Created slice user.slice - User and Session Slice. Feb 14 01:09:19.647102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 01:09:19.647125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:09:19.647160 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 14 01:09:19.647189 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 14 01:09:19.647211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 14 01:09:19.647238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 01:09:19.647259 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 14 01:09:19.647285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 01:09:19.647320 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 14 01:09:19.647342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 01:09:19.647390 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 01:09:19.647425 systemd[1]: Reached target slices.target - Slice Units. Feb 14 01:09:19.651548 systemd[1]: Reached target swap.target - Swaps. Feb 14 01:09:19.651604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 14 01:09:19.651914 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 14 01:09:19.652125 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 01:09:19.652151 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 01:09:19.652171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:09:19.652192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 01:09:19.652413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:09:19.652441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 14 01:09:19.660608 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 14 01:09:19.660638 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 14 01:09:19.660660 systemd[1]: Mounting media.mount - External Media Directory... Feb 14 01:09:19.660708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 14 01:09:19.660787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 14 01:09:19.660843 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 14 01:09:19.660875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 14 01:09:19.660943 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 14 01:09:19.660989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:09:19.661011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 01:09:19.661032 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 14 01:09:19.661067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:09:19.661142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 01:09:19.661220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:09:19.661262 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 14 01:09:19.661381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:09:19.661417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 14 01:09:19.661536 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 14 01:09:19.661575 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 14 01:09:19.661619 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 01:09:19.661724 kernel: fuse: init (API version 7.39) Feb 14 01:09:19.661751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 01:09:19.661772 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 14 01:09:19.661792 kernel: loop: module loaded Feb 14 01:09:19.661833 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 14 01:09:19.662012 systemd-journald[1165]: Collecting audit messages is disabled. Feb 14 01:09:19.662091 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 01:09:19.662132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:19.662155 systemd-journald[1165]: Journal started Feb 14 01:09:19.662284 systemd-journald[1165]: Runtime Journal (/run/log/journal/60df5a9cf4774c429f8900e0df4dd81f) is 4.7M, max 38.0M, 33.2M free. Feb 14 01:09:19.677110 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 14 01:09:19.677416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 14 01:09:19.686613 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 01:09:19.689116 systemd[1]: Mounted media.mount - External Media Directory. Feb 14 01:09:19.692341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 14 01:09:19.693705 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 14 01:09:19.695067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 14 01:09:19.697397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:09:19.700271 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 14 01:09:19.700568 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 14 01:09:19.706534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:09:19.706822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:09:19.708089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:09:19.708352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:09:19.709576 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 14 01:09:19.709876 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 14 01:09:19.714744 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:09:19.715113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:09:19.720019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 01:09:19.721975 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 14 01:09:19.726206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 14 01:09:19.776469 kernel: ACPI: bus type drm_connector registered Feb 14 01:09:19.777515 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 14 01:09:19.794609 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 14 01:09:19.807866 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 14 01:09:19.814726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 14 01:09:19.831775 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 14 01:09:19.841808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 14 01:09:19.844559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 01:09:19.853691 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 14 01:09:19.864760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 01:09:19.875906 systemd-journald[1165]: Time spent on flushing to /var/log/journal/60df5a9cf4774c429f8900e0df4dd81f is 64.443ms for 1122 entries. Feb 14 01:09:19.875906 systemd-journald[1165]: System Journal (/var/log/journal/60df5a9cf4774c429f8900e0df4dd81f) is 8.0M, max 584.8M, 576.8M free. Feb 14 01:09:19.984731 systemd-journald[1165]: Received client request to flush runtime journal. Feb 14 01:09:19.876183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 01:09:19.891661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 01:09:19.903246 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 14 01:09:19.906378 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 01:09:19.907762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 14 01:09:19.920666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 14 01:09:19.921764 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 14 01:09:19.924125 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 14 01:09:19.931345 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 14 01:09:19.947941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:09:19.991432 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 14 01:09:20.006322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 01:09:20.018760 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 14 01:09:20.019953 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Feb 14 01:09:20.019981 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Feb 14 01:09:20.035142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:09:20.040667 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 14 01:09:20.068774 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 14 01:09:20.101427 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 14 01:09:20.109756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 01:09:20.134796 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Feb 14 01:09:20.134829 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Feb 14 01:09:20.144058 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:09:20.646229 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 14 01:09:20.663788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:09:20.696839 systemd-udevd[1256]: Using default interface naming scheme 'v255'. Feb 14 01:09:20.726799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:09:20.737937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 01:09:20.758676 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 14 01:09:20.835210 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 14 01:09:20.865822 kernel: mousedev: PS/2 mouse device common for all mice Feb 14 01:09:20.893475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 14 01:09:20.899968 kernel: ACPI: button: Power Button [PWRF] Feb 14 01:09:20.898160 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 01:09:20.921491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1270) Feb 14 01:09:21.046584 systemd-networkd[1261]: lo: Link UP Feb 14 01:09:21.046597 systemd-networkd[1261]: lo: Gained carrier Feb 14 01:09:21.049854 systemd-networkd[1261]: Enumeration completed Feb 14 01:09:21.051193 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 01:09:21.052404 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 14 01:09:21.055567 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 01:09:21.060790 systemd-networkd[1261]: eth0: Link UP Feb 14 01:09:21.060888 systemd-networkd[1261]: eth0: Gained carrier Feb 14 01:09:21.061014 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:09:21.061233 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 14 01:09:21.077661 systemd-networkd[1261]: eth0: DHCPv4 address 10.230.17.130/30, gateway 10.230.17.129 acquired from 10.230.17.129 Feb 14 01:09:21.083680 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 14 01:09:21.087722 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:09:21.153476 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 14 01:09:21.163712 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 14 01:09:21.163998 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 14 01:09:21.221246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:09:21.240801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 14 01:09:21.388226 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 14 01:09:21.421951 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 01:09:21.423645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:09:21.442636 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:09:21.478192 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 01:09:21.480215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:09:21.486721 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 01:09:21.508807 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:09:21.541993 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 01:09:21.543741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 14 01:09:21.544724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 14 01:09:21.544883 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 01:09:21.545892 systemd[1]: Reached target machines.target - Containers. Feb 14 01:09:21.548572 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 14 01:09:21.554651 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 14 01:09:21.557564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 14 01:09:21.560548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:09:21.565609 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Feb 14 01:09:21.572663 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 14 01:09:21.579625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 14 01:09:21.590610 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 14 01:09:21.616047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 14 01:09:21.632488 kernel: loop0: detected capacity change from 0 to 142488 Feb 14 01:09:21.642371 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 14 01:09:21.645609 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 14 01:09:21.669801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 14 01:09:21.696538 kernel: loop1: detected capacity change from 0 to 140768 Feb 14 01:09:21.742507 kernel: loop2: detected capacity change from 0 to 210664 Feb 14 01:09:21.780500 kernel: loop3: detected capacity change from 0 to 8 Feb 14 01:09:21.800492 kernel: loop4: detected capacity change from 0 to 142488 Feb 14 01:09:21.836516 kernel: loop5: detected capacity change from 0 to 140768 Feb 14 01:09:21.870523 kernel: loop6: detected capacity change from 0 to 210664 Feb 14 01:09:21.899478 kernel: loop7: detected capacity change from 0 to 8 Feb 14 01:09:21.901063 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 14 01:09:21.902471 (sd-merge)[1321]: Merged extensions into '/usr'. Feb 14 01:09:21.908557 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Feb 14 01:09:21.908598 systemd[1]: Reloading... Feb 14 01:09:22.009513 zram_generator::config[1349]: No configuration found. Feb 14 01:09:22.203358 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 14 01:09:22.221336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:09:22.233002 systemd-networkd[1261]: eth0: Gained IPv6LL Feb 14 01:09:22.305624 systemd[1]: Reloading finished in 396 ms. Feb 14 01:09:22.328198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 01:09:22.331197 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 14 01:09:22.332413 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 14 01:09:22.343763 systemd[1]: Starting ensure-sysext.service... Feb 14 01:09:22.347663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 01:09:22.362605 systemd[1]: Reloading requested from client PID 1414 ('systemctl') (unit ensure-sysext.service)... Feb 14 01:09:22.362637 systemd[1]: Reloading... Feb 14 01:09:22.401540 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 14 01:09:22.403015 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 14 01:09:22.405989 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 14 01:09:22.406466 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. 
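
The loop0..loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching each extension image ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') on a loop device, scanning each twice, and merging them over /usr with an overlay mount. A toy sketch of the merge semantics only, treating each extension as a plain directory tree where later layers shadow earlier ones, which is the effect the overlay achieves; the layer paths are hypothetical:

    import os

    def merge_layers(base, layers):
        # Build a merged file view: later layers shadow earlier ones, like
        # the overlayfs lowerdir stack systemd-sysext mounts over /usr.
        merged = {}
        for layer in [base] + list(layers):
            for root, _dirs, files in os.walk(layer):
                for name in files:
                    path = os.path.join(root, name)
                    merged[os.path.relpath(path, layer)] = path
        return merged

    # view = merge_layers("/usr", ["/run/extensions/kubernetes/usr"])
    # print(view.get("bin/kubectl"))

Because the merge is a mount rather than a copy, dropping an extension image and re-running the merge reverts /usr cleanly, which is what makes sysext a good fit for shipping Kubernetes and Docker onto an otherwise immutable Flatcar /usr.
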
Feb 14 01:09:22.406965 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Feb 14 01:09:22.413560 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 01:09:22.413867 systemd-tmpfiles[1415]: Skipping /boot Feb 14 01:09:22.433081 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 01:09:22.433275 systemd-tmpfiles[1415]: Skipping /boot Feb 14 01:09:22.465472 zram_generator::config[1448]: No configuration found. Feb 14 01:09:22.650001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:09:22.733605 systemd[1]: Reloading finished in 370 ms. Feb 14 01:09:22.761347 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:09:22.773871 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 01:09:22.781718 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 14 01:09:22.788855 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 14 01:09:22.808689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 01:09:22.821786 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 14 01:09:22.840330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:22.841174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:09:22.848021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:09:22.863150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:09:22.890535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:09:22.892065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:09:22.893636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:22.895290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:09:22.898613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:09:22.902867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:09:22.903944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:09:22.915285 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:09:22.916207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:09:22.922047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:22.922619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:09:22.928956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:09:22.941553 augenrules[1532]: No rules Feb 14 01:09:22.937865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 14 01:09:22.949765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:09:22.954759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:09:22.955094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:22.963029 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 01:09:22.970379 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 14 01:09:22.972149 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 14 01:09:22.973763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:09:22.973997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:09:22.975852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:09:22.980372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:09:22.985037 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:09:22.990255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:09:22.995426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 01:09:23.008004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:23.008594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:09:23.014890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:09:23.025543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 01:09:23.032869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:09:23.039696 systemd-resolved[1516]: Positive Trust Anchors: Feb 14 01:09:23.039709 systemd-resolved[1516]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 01:09:23.039768 systemd-resolved[1516]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 01:09:23.045783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:09:23.048757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:09:23.050109 systemd-resolved[1516]: Using system hostname 'srv-krhnz.gb1.brightbox.com'. Feb 14 01:09:23.061826 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 14 01:09:23.063324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
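
The positive trust anchor that systemd-resolved loads above is the root zone's KSK-2017 DS record. A sketch pulling its fields apart (record text copied from the log; the field meanings are standard DNSSEC):

    # ". IN DS <key tag> <algorithm> <digest type> <digest>"
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    _owner, _class, _rtype, tag, alg, dtype, digest = ds.split()
    assert int(tag) == 20326                  # key tag of the root KSK-2017
    assert int(alg) == 8                      # RSA/SHA-256
    assert int(dtype) == 2                    # digest algorithm is SHA-256,
    assert len(bytes.fromhex(digest)) == 32   # hence a 32-byte digest
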
Feb 14 01:09:23.063543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:09:23.066904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 01:09:23.069611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:09:23.069906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:09:23.072221 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 01:09:23.072637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 01:09:23.075191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:09:23.075465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:09:23.077367 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:09:23.077788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:09:23.081061 systemd[1]: Finished ensure-sysext.service. Feb 14 01:09:23.088947 systemd[1]: Reached target network.target - Network. Feb 14 01:09:23.089839 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 01:09:23.090831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:09:23.091800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 01:09:23.092016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 01:09:23.099653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 14 01:09:23.103215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 01:09:23.182274 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 01:09:23.183724 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 01:09:23.184714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 14 01:09:23.185716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 01:09:23.186699 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 14 01:09:23.187563 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 01:09:23.187604 systemd[1]: Reached target paths.target - Path Units. Feb 14 01:09:23.188270 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 01:09:23.189228 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 01:09:23.190124 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 01:09:23.191186 systemd[1]: Reached target timers.target - Timer Units. Feb 14 01:09:23.193553 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 01:09:23.196574 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 14 01:09:23.199442 systemd-networkd[1261]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8460:24:19ff:fee6:1182/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8460:24:19ff:fee6:1182/64 assigned by NDisc. 
Feb 14 01:09:23.199480 systemd-networkd[1261]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 14 01:09:23.199684 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 01:09:23.201614 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 14 01:09:23.202394 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 01:09:23.203138 systemd[1]: Reached target basic.target - Basic System. Feb 14 01:09:23.204090 systemd[1]: System is tainted: cgroupsv1 Feb 14 01:09:23.204154 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:09:23.204196 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:09:23.207305 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 01:09:23.210647 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 14 01:09:23.219629 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 01:09:23.228555 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 01:09:23.246657 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 01:09:23.247653 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 01:09:23.254311 jq[1586]: false Feb 14 01:09:23.256539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:09:23.277678 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 01:09:23.279007 dbus-daemon[1584]: [system] SELinux support is enabled Feb 14 01:09:23.281464 dbus-daemon[1584]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1261 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 14 01:09:23.293937 systemd-timesyncd[1577]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org). Feb 14 01:09:23.294010 systemd-timesyncd[1577]: Initial clock synchronization to Fri 2025-02-14 01:09:23.220393 UTC. Feb 14 01:09:23.294657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 14 01:09:23.297575 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Feb 14 01:09:23.307314 extend-filesystems[1589]: Found loop4 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found loop5 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found loop6 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found loop7 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda1 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda2 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda3 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found usr Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda4 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda6 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda7 Feb 14 01:09:23.310095 extend-filesystems[1589]: Found vda9 Feb 14 01:09:23.310095 extend-filesystems[1589]: Checking size of /dev/vda9 Feb 14 01:09:23.366696 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 14 01:09:23.307654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 14 01:09:23.367352 extend-filesystems[1589]: Resized partition /dev/vda9 Feb 14 01:09:23.323181 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 14 01:09:23.369437 extend-filesystems[1614]: resize2fs 1.47.1 (20-May-2024) Feb 14 01:09:23.371795 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 14 01:09:23.375131 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 01:09:23.387823 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 01:09:23.396607 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 01:09:23.409516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1260) Feb 14 01:09:23.405339 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 14 01:09:23.419019 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 01:09:23.420678 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 01:09:23.422942 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 01:09:23.423364 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 01:09:23.430031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 01:09:23.442042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 14 01:09:23.444527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 01:09:23.470953 jq[1623]: true Feb 14 01:09:23.488925 (ntainerd)[1629]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 01:09:23.512332 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 14 01:09:23.519676 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 01:09:23.520010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 14 01:09:23.536508 update_engine[1622]: I20250214 01:09:23.534754 1622 main.cc:92] Flatcar Update Engine starting Feb 14 01:09:23.535768 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 14 01:09:23.540688 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 01:09:23.540749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 01:09:23.558481 systemd[1]: Started update-engine.service - Update Engine. Feb 14 01:09:23.562038 update_engine[1622]: I20250214 01:09:23.561759 1622 update_check_scheduler.cc:74] Next update check in 11m24s Feb 14 01:09:23.563498 jq[1633]: true Feb 14 01:09:23.567078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 14 01:09:23.573777 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 01:09:23.601580 tar[1627]: linux-amd64/helm Feb 14 01:09:23.826478 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 14 01:09:23.880003 systemd-logind[1617]: Watching system buttons on /dev/input/event2 (Power Button) Feb 14 01:09:23.880078 systemd-logind[1617]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 14 01:09:23.880826 bash[1661]: Updated "/home/core/.ssh/authorized_keys" Feb 14 01:09:23.888757 systemd-logind[1617]: New seat seat0. Feb 14 01:09:23.897074 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 14 01:09:23.897074 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 14 01:09:23.897074 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 14 01:09:23.890957 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 01:09:23.947597 extend-filesystems[1589]: Resized filesystem in /dev/vda9 Feb 14 01:09:23.900978 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 01:09:23.904012 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 01:09:23.904405 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 01:09:23.924814 systemd[1]: Starting sshkeys.service... Feb 14 01:09:23.992433 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 14 01:09:23.994290 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 14 01:09:23.996657 dbus-daemon[1584]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1643 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 14 01:09:23.997853 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 14 01:09:24.008607 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 14 01:09:24.020622 systemd[1]: Starting polkit.service - Authorization Manager... 
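
The online resize above grows the root filesystem from 1617920 to 15121403 blocks without unmounting it; at the 4 KiB block size resize2fs reports, that is about 6.2 GiB to 57.7 GiB. The conversion, for the record:

    BLOCK = 4096                              # "(4k) blocks", per resize2fs above
    old, new = 1_617_920, 15_121_403
    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{gib(old):.2f} GiB -> {gib(new):.2f} GiB")   # 6.17 GiB -> 57.68 GiB
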
Feb 14 01:09:24.049096 polkitd[1683]: Started polkitd version 121 Feb 14 01:09:24.061588 polkitd[1683]: Loading rules from directory /etc/polkit-1/rules.d Feb 14 01:09:24.062043 polkitd[1683]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 14 01:09:24.065493 polkitd[1683]: Finished loading, compiling and executing 2 rules Feb 14 01:09:24.068649 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 14 01:09:24.069011 polkitd[1683]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 14 01:09:24.068911 systemd[1]: Started polkit.service - Authorization Manager. Feb 14 01:09:24.100970 systemd-hostnamed[1643]: Hostname set to (static) Feb 14 01:09:24.148692 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 01:09:24.169185 containerd[1629]: time="2025-02-14T01:09:24.165738839Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 01:09:24.222543 containerd[1629]: time="2025-02-14T01:09:24.220951731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.224973 containerd[1629]: time="2025-02-14T01:09:24.224630253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:09:24.224973 containerd[1629]: time="2025-02-14T01:09:24.224676831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 01:09:24.224973 containerd[1629]: time="2025-02-14T01:09:24.224703890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 01:09:24.225158 containerd[1629]: time="2025-02-14T01:09:24.224971696Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 01:09:24.225158 containerd[1629]: time="2025-02-14T01:09:24.224999464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.225158 containerd[1629]: time="2025-02-14T01:09:24.225099415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:09:24.225158 containerd[1629]: time="2025-02-14T01:09:24.225122167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226233 containerd[1629]: time="2025-02-14T01:09:24.225423674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226287 containerd[1629]: time="2025-02-14T01:09:24.226235110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226287 containerd[1629]: time="2025-02-14T01:09:24.226262831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226359 containerd[1629]: time="2025-02-14T01:09:24.226309933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226594 containerd[1629]: time="2025-02-14T01:09:24.226488614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.226900 containerd[1629]: time="2025-02-14T01:09:24.226873105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:09:24.227658 containerd[1629]: time="2025-02-14T01:09:24.227116449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:09:24.227658 containerd[1629]: time="2025-02-14T01:09:24.227162464Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 01:09:24.227658 containerd[1629]: time="2025-02-14T01:09:24.227291941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 01:09:24.227658 containerd[1629]: time="2025-02-14T01:09:24.227369426Z" level=info msg="metadata content store policy set" policy=shared Feb 14 01:09:24.233373 containerd[1629]: time="2025-02-14T01:09:24.233339175Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 01:09:24.233506 containerd[1629]: time="2025-02-14T01:09:24.233427092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 01:09:24.233555 containerd[1629]: time="2025-02-14T01:09:24.233517726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 01:09:24.233555 containerd[1629]: time="2025-02-14T01:09:24.233545950Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 01:09:24.233672 containerd[1629]: time="2025-02-14T01:09:24.233582214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 01:09:24.234253 containerd[1629]: time="2025-02-14T01:09:24.233754997Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234728291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234896849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234923007Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234946305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234968277Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.234990135Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235010464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235030918Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235085971Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235110095Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235129378Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235149280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235177955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235456 containerd[1629]: time="2025-02-14T01:09:24.235198289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235218925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235255986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235279010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235315964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235337299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235357388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235377577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.235937 containerd[1629]: time="2025-02-14T01:09:24.235417236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236484343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236527929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236552209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236577404Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236617982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236641111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236658936Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236715888Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236748458Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236768152Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236787650Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236804212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236822789Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 01:09:24.240458 containerd[1629]: time="2025-02-14T01:09:24.236848891Z" level=info msg="NRI interface is disabled by configuration." Feb 14 01:09:24.240961 containerd[1629]: time="2025-02-14T01:09:24.236867921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.237244541Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.237336830Z" level=info msg="Connect containerd service" Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.237399556Z" level=info msg="using legacy CRI server" Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.237417025Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.237581581Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 01:09:24.241032 containerd[1629]: time="2025-02-14T01:09:24.240698102Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 
01:09:24.241993 containerd[1629]: time="2025-02-14T01:09:24.241949299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 01:09:24.242077 containerd[1629]: time="2025-02-14T01:09:24.242051076Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 01:09:24.242274 containerd[1629]: time="2025-02-14T01:09:24.242213032Z" level=info msg="Start subscribing containerd event" Feb 14 01:09:24.242344 containerd[1629]: time="2025-02-14T01:09:24.242280542Z" level=info msg="Start recovering state" Feb 14 01:09:24.242404 containerd[1629]: time="2025-02-14T01:09:24.242380657Z" level=info msg="Start event monitor" Feb 14 01:09:24.242485 containerd[1629]: time="2025-02-14T01:09:24.242424659Z" level=info msg="Start snapshots syncer" Feb 14 01:09:24.242485 containerd[1629]: time="2025-02-14T01:09:24.242466577Z" level=info msg="Start cni network conf syncer for default" Feb 14 01:09:24.242485 containerd[1629]: time="2025-02-14T01:09:24.242482352Z" level=info msg="Start streaming server" Feb 14 01:09:24.242772 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 01:09:24.245633 containerd[1629]: time="2025-02-14T01:09:24.245502687Z" level=info msg="containerd successfully booted in 0.084620s" Feb 14 01:09:24.519273 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 01:09:24.561095 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 01:09:24.574270 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 01:09:24.588976 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 01:09:24.589407 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 14 01:09:24.603442 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 01:09:24.624573 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 01:09:24.638978 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 01:09:24.646517 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 14 01:09:24.649718 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 01:09:24.700229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 14 01:09:24.709966 systemd[1]: Started sshd@0-10.230.17.130:22-147.75.109.163:59432.service - OpenSSH per-connection server daemon (147.75.109.163:59432). Feb 14 01:09:24.769218 tar[1627]: linux-amd64/LICENSE Feb 14 01:09:24.770639 tar[1627]: linux-amd64/README.md Feb 14 01:09:24.792288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 14 01:09:25.105675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:09:25.115148 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:09:25.609078 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 59432 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:25.613945 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:25.631626 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 01:09:25.640911 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 01:09:25.646598 systemd-logind[1617]: New session 1 of user core. Feb 14 01:09:25.671676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Feb 14 01:09:25.683908 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 01:09:25.695492 (systemd)[1750]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 01:09:25.784346 kubelet[1738]: E0214 01:09:25.784274 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:09:25.787346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:09:25.787757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:09:25.844681 systemd[1750]: Queued start job for default target default.target. Feb 14 01:09:25.845715 systemd[1750]: Created slice app.slice - User Application Slice. Feb 14 01:09:25.845849 systemd[1750]: Reached target paths.target - Paths. Feb 14 01:09:25.845962 systemd[1750]: Reached target timers.target - Timers. Feb 14 01:09:25.852632 systemd[1750]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 01:09:25.874865 systemd[1750]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 01:09:25.875113 systemd[1750]: Reached target sockets.target - Sockets. Feb 14 01:09:25.875288 systemd[1750]: Reached target basic.target - Basic System. Feb 14 01:09:25.875368 systemd[1750]: Reached target default.target - Main User Target. Feb 14 01:09:25.875422 systemd[1750]: Startup finished in 167ms. Feb 14 01:09:25.875881 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 01:09:25.881275 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 01:09:26.510354 systemd[1]: Started sshd@1-10.230.17.130:22-147.75.109.163:59448.service - OpenSSH per-connection server daemon (147.75.109.163:59448). Feb 14 01:09:27.390424 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 59448 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:27.392767 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:27.401509 systemd-logind[1617]: New session 2 of user core. Feb 14 01:09:27.411279 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 01:09:28.007809 sshd[1765]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:28.012651 systemd[1]: sshd@1-10.230.17.130:22-147.75.109.163:59448.service: Deactivated successfully. Feb 14 01:09:28.016054 systemd-logind[1617]: Session 2 logged out. Waiting for processes to exit. Feb 14 01:09:28.016635 systemd[1]: session-2.scope: Deactivated successfully. Feb 14 01:09:28.018945 systemd-logind[1617]: Removed session 2. Feb 14 01:09:28.162905 systemd[1]: Started sshd@2-10.230.17.130:22-147.75.109.163:59460.service - OpenSSH per-connection server daemon (147.75.109.163:59460). Feb 14 01:09:29.036347 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 59460 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:29.039052 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:29.045302 systemd-logind[1617]: New session 3 of user core. Feb 14 01:09:29.051961 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 14 01:09:29.657051 sshd[1774]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:29.661592 systemd-logind[1617]: Session 3 logged out. Waiting for processes to exit. Feb 14 01:09:29.662867 systemd[1]: sshd@2-10.230.17.130:22-147.75.109.163:59460.service: Deactivated successfully. Feb 14 01:09:29.673975 systemd[1]: session-3.scope: Deactivated successfully. Feb 14 01:09:29.679657 systemd-logind[1617]: Removed session 3. Feb 14 01:09:29.697928 login[1722]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 01:09:29.704241 login[1721]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 01:09:29.705988 systemd-logind[1617]: New session 4 of user core. Feb 14 01:09:29.715006 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 01:09:29.720390 systemd-logind[1617]: New session 5 of user core. Feb 14 01:09:29.722816 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 14 01:09:30.328148 coreos-metadata[1583]: Feb 14 01:09:30.327 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:09:30.354688 coreos-metadata[1583]: Feb 14 01:09:30.354 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 14 01:09:30.360714 coreos-metadata[1583]: Feb 14 01:09:30.360 INFO Fetch failed with 404: resource not found Feb 14 01:09:30.360714 coreos-metadata[1583]: Feb 14 01:09:30.360 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 01:09:30.361531 coreos-metadata[1583]: Feb 14 01:09:30.361 INFO Fetch successful Feb 14 01:09:30.361652 coreos-metadata[1583]: Feb 14 01:09:30.361 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 14 01:09:30.381011 coreos-metadata[1583]: Feb 14 01:09:30.380 INFO Fetch successful Feb 14 01:09:30.381224 coreos-metadata[1583]: Feb 14 01:09:30.381 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 14 01:09:30.396706 coreos-metadata[1583]: Feb 14 01:09:30.396 INFO Fetch successful Feb 14 01:09:30.397027 coreos-metadata[1583]: Feb 14 01:09:30.396 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 14 01:09:30.417670 coreos-metadata[1583]: Feb 14 01:09:30.417 INFO Fetch successful Feb 14 01:09:30.418107 coreos-metadata[1583]: Feb 14 01:09:30.418 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 14 01:09:30.441809 coreos-metadata[1583]: Feb 14 01:09:30.441 INFO Fetch successful Feb 14 01:09:30.479329 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 14 01:09:30.481353 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
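
The metadata fetch sequence above shows the agent falling back from the OpenStack-format document (404) to the EC2-style endpoints on the link-local address. A sketch of the same probe order, meaningful only from inside an instance that serves 169.254.169.254 (paths copied from the log):

    import urllib.error
    import urllib.request

    BASE = "http://169.254.169.254"
    for path in ("/openstack/2012-08-10/meta_data.json",   # 404 in the log above
                 "/latest/meta-data/hostname",
                 "/latest/meta-data/instance-id"):
        try:
            with urllib.request.urlopen(BASE + path, timeout=2) as resp:
                print(path, "->", resp.read(64))
        except urllib.error.HTTPError as err:
            print(path, "-> HTTP", err.code)
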
Feb 14 01:09:31.166575 coreos-metadata[1682]: Feb 14 01:09:31.166 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:09:31.189745 coreos-metadata[1682]: Feb 14 01:09:31.189 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 14 01:09:31.229100 coreos-metadata[1682]: Feb 14 01:09:31.228 INFO Fetch successful Feb 14 01:09:31.229327 coreos-metadata[1682]: Feb 14 01:09:31.229 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 14 01:09:31.270667 coreos-metadata[1682]: Feb 14 01:09:31.270 INFO Fetch successful Feb 14 01:09:31.273180 unknown[1682]: wrote ssh authorized keys file for user: core Feb 14 01:09:31.302479 update-ssh-keys[1824]: Updated "/home/core/.ssh/authorized_keys" Feb 14 01:09:31.303918 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 14 01:09:31.310725 systemd[1]: Finished sshkeys.service. Feb 14 01:09:31.318606 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 01:09:31.318834 systemd[1]: Startup finished in 20.122s (kernel) + 12.931s (userspace) = 33.053s. Feb 14 01:09:35.941785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 14 01:09:35.950713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:09:36.165810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:09:36.180384 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:09:36.266774 kubelet[1842]: E0214 01:09:36.266420 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:09:36.271937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:09:36.272306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:09:39.783495 systemd[1]: Started sshd@3-10.230.17.130:22-147.75.109.163:40256.service - OpenSSH per-connection server daemon (147.75.109.163:40256). Feb 14 01:09:40.670663 sshd[1852]: Accepted publickey for core from 147.75.109.163 port 40256 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:40.673113 sshd[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:40.682723 systemd-logind[1617]: New session 6 of user core. Feb 14 01:09:40.694179 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 14 01:09:41.291921 sshd[1852]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:41.299639 systemd[1]: sshd@3-10.230.17.130:22-147.75.109.163:40256.service: Deactivated successfully. Feb 14 01:09:41.300737 systemd-logind[1617]: Session 6 logged out. Waiting for processes to exit. Feb 14 01:09:41.304432 systemd[1]: session-6.scope: Deactivated successfully. Feb 14 01:09:41.308292 systemd-logind[1617]: Removed session 6. Feb 14 01:09:41.441852 systemd[1]: Started sshd@4-10.230.17.130:22-147.75.109.163:40268.service - OpenSSH per-connection server daemon (147.75.109.163:40268). 
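
The boot-time summary above adds up as logged, and a quick check confirms it:

    kernel, userspace = 20.122, 12.931        # seconds, from the summary above
    assert f"{kernel + userspace:.3f}s" == "33.053s"
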
Feb 14 01:09:42.339772 sshd[1860]: Accepted publickey for core from 147.75.109.163 port 40268 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:42.343071 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:42.350608 systemd-logind[1617]: New session 7 of user core. Feb 14 01:09:42.353883 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 14 01:09:42.959850 sshd[1860]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:42.964626 systemd[1]: sshd@4-10.230.17.130:22-147.75.109.163:40268.service: Deactivated successfully. Feb 14 01:09:42.970141 systemd-logind[1617]: Session 7 logged out. Waiting for processes to exit. Feb 14 01:09:42.970179 systemd[1]: session-7.scope: Deactivated successfully. Feb 14 01:09:42.973055 systemd-logind[1617]: Removed session 7. Feb 14 01:09:43.110998 systemd[1]: Started sshd@5-10.230.17.130:22-147.75.109.163:40274.service - OpenSSH per-connection server daemon (147.75.109.163:40274). Feb 14 01:09:44.003289 sshd[1868]: Accepted publickey for core from 147.75.109.163 port 40274 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:44.005682 sshd[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:44.014248 systemd-logind[1617]: New session 8 of user core. Feb 14 01:09:44.021075 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 14 01:09:44.627807 sshd[1868]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:44.632771 systemd[1]: sshd@5-10.230.17.130:22-147.75.109.163:40274.service: Deactivated successfully. Feb 14 01:09:44.637630 systemd-logind[1617]: Session 8 logged out. Waiting for processes to exit. Feb 14 01:09:44.639216 systemd[1]: session-8.scope: Deactivated successfully. Feb 14 01:09:44.641439 systemd-logind[1617]: Removed session 8. Feb 14 01:09:44.780018 systemd[1]: Started sshd@6-10.230.17.130:22-147.75.109.163:40290.service - OpenSSH per-connection server daemon (147.75.109.163:40290). Feb 14 01:09:45.664290 sshd[1876]: Accepted publickey for core from 147.75.109.163 port 40290 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:45.666825 sshd[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:45.674529 systemd-logind[1617]: New session 9 of user core. Feb 14 01:09:45.686259 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 14 01:09:46.155076 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 14 01:09:46.156109 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:09:46.170815 sudo[1880]: pam_unix(sudo:session): session closed for user root Feb 14 01:09:46.314878 sshd[1876]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:46.319325 systemd[1]: sshd@6-10.230.17.130:22-147.75.109.163:40290.service: Deactivated successfully. Feb 14 01:09:46.324391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 14 01:09:46.325712 systemd[1]: session-9.scope: Deactivated successfully. Feb 14 01:09:46.327085 systemd-logind[1617]: Session 9 logged out. Waiting for processes to exit. Feb 14 01:09:46.339759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:09:46.341949 systemd-logind[1617]: Removed session 9. 
Feb 14 01:09:46.468058 systemd[1]: Started sshd@7-10.230.17.130:22-147.75.109.163:40292.service - OpenSSH per-connection server daemon (147.75.109.163:40292). Feb 14 01:09:46.492645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:09:46.498543 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:09:46.583966 kubelet[1899]: E0214 01:09:46.583850 1899 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:09:46.586407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:09:46.586738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:09:47.350636 sshd[1893]: Accepted publickey for core from 147.75.109.163 port 40292 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:47.352914 sshd[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:47.359491 systemd-logind[1617]: New session 10 of user core. Feb 14 01:09:47.371127 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 01:09:47.828031 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 14 01:09:47.828504 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:09:47.835078 sudo[1911]: pam_unix(sudo:session): session closed for user root Feb 14 01:09:47.843596 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 14 01:09:47.844090 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:09:47.863887 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 14 01:09:47.867835 auditctl[1914]: No rules Feb 14 01:09:47.869745 systemd[1]: audit-rules.service: Deactivated successfully. Feb 14 01:09:47.870252 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 14 01:09:47.877972 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 01:09:47.915418 augenrules[1933]: No rules Feb 14 01:09:47.917427 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 01:09:47.919914 sudo[1910]: pam_unix(sudo:session): session closed for user root Feb 14 01:09:48.063987 sshd[1893]: pam_unix(sshd:session): session closed for user core Feb 14 01:09:48.068649 systemd[1]: sshd@7-10.230.17.130:22-147.75.109.163:40292.service: Deactivated successfully. Feb 14 01:09:48.072333 systemd-logind[1617]: Session 10 logged out. Waiting for processes to exit. Feb 14 01:09:48.072683 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 01:09:48.074933 systemd-logind[1617]: Removed session 10. Feb 14 01:09:48.221002 systemd[1]: Started sshd@8-10.230.17.130:22-147.75.109.163:40304.service - OpenSSH per-connection server daemon (147.75.109.163:40304). 
Feb 14 01:09:49.100562 sshd[1942]: Accepted publickey for core from 147.75.109.163 port 40304 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:09:49.102835 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:09:49.111100 systemd-logind[1617]: New session 11 of user core. Feb 14 01:09:49.118046 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 14 01:09:49.576642 sudo[1946]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 14 01:09:49.577082 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:09:50.065002 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 14 01:09:50.078542 (dockerd)[1962]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 14 01:09:50.533672 dockerd[1962]: time="2025-02-14T01:09:50.531421127Z" level=info msg="Starting up" Feb 14 01:09:50.809023 dockerd[1962]: time="2025-02-14T01:09:50.808856228Z" level=info msg="Loading containers: start." Feb 14 01:09:50.955512 kernel: Initializing XFRM netlink socket Feb 14 01:09:51.077101 systemd-networkd[1261]: docker0: Link UP Feb 14 01:09:51.099023 dockerd[1962]: time="2025-02-14T01:09:51.098952158Z" level=info msg="Loading containers: done." Feb 14 01:09:51.118569 dockerd[1962]: time="2025-02-14T01:09:51.118497057Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 14 01:09:51.118759 dockerd[1962]: time="2025-02-14T01:09:51.118662819Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 14 01:09:51.118890 dockerd[1962]: time="2025-02-14T01:09:51.118862973Z" level=info msg="Daemon has completed initialization" Feb 14 01:09:51.171435 dockerd[1962]: time="2025-02-14T01:09:51.170502995Z" level=info msg="API listen on /run/docker.sock" Feb 14 01:09:51.172675 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 14 01:09:52.489951 containerd[1629]: time="2025-02-14T01:09:52.489756424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 14 01:09:53.303343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251808707.mount: Deactivated successfully. Feb 14 01:09:54.140339 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
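
The overlay2 warning above keys off a single kernel option. One hedged way to confirm it is to read the running kernel's config, assuming the kernel exposes /proc/config.gz (not every build enables that):

    import gzip

    # CONFIG_OVERLAY_FS_REDIRECT_DIR is the option dockerd's warning names.
    with gzip.open("/proc/config.gz", "rt") as cfg:
        for line in cfg:
            if "OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.strip())
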
Feb 14 01:09:55.872965 containerd[1629]: time="2025-02-14T01:09:55.872049425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:55.878898 containerd[1629]: time="2025-02-14T01:09:55.878708462Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678222" Feb 14 01:09:55.879721 containerd[1629]: time="2025-02-14T01:09:55.879662940Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:55.885334 containerd[1629]: time="2025-02-14T01:09:55.885241740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:55.888477 containerd[1629]: time="2025-02-14T01:09:55.886799216Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.396903475s" Feb 14 01:09:55.888477 containerd[1629]: time="2025-02-14T01:09:55.886911668Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 14 01:09:55.924871 containerd[1629]: time="2025-02-14T01:09:55.924490059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 14 01:09:56.693146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 14 01:09:56.711904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:09:56.929819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:09:56.947638 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:09:57.142766 kubelet[2188]: E0214 01:09:57.142640 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:09:57.145139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:09:57.145652 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
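[Annotation] The kube-apiserver:v1.30.10 pull above runs through containerd's CRI plugin; the "in 3.396903475s" figure is the elapsed wall time of the pull. A comparable pull sketched against the containerd Go client — the k8s.io namespace and socket path are assumptions matching a typical CRI setup, not values taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	cli, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// CRI-managed images conventionally live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := cli.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.30.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Pulled %s in %s\n", img.Name(), time.Since(start))
}
```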
Feb 14 01:09:58.679233 containerd[1629]: time="2025-02-14T01:09:58.679100155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:58.680838 containerd[1629]: time="2025-02-14T01:09:58.680771875Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611553" Feb 14 01:09:58.681986 containerd[1629]: time="2025-02-14T01:09:58.681924052Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:58.688046 containerd[1629]: time="2025-02-14T01:09:58.687969044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:09:58.690680 containerd[1629]: time="2025-02-14T01:09:58.689626208Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.765077377s" Feb 14 01:09:58.690680 containerd[1629]: time="2025-02-14T01:09:58.689685168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 14 01:09:58.727536 containerd[1629]: time="2025-02-14T01:09:58.727476280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 14 01:10:00.617976 containerd[1629]: time="2025-02-14T01:10:00.617729969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:00.620120 containerd[1629]: time="2025-02-14T01:10:00.619931405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782138" Feb 14 01:10:00.621146 containerd[1629]: time="2025-02-14T01:10:00.621080508Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:00.625154 containerd[1629]: time="2025-02-14T01:10:00.625021139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:00.627481 containerd[1629]: time="2025-02-14T01:10:00.626663088Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.89882644s" Feb 14 01:10:00.627481 containerd[1629]: time="2025-02-14T01:10:00.626725541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 14 01:10:00.664367 
containerd[1629]: time="2025-02-14T01:10:00.663970141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 14 01:10:02.432143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167629498.mount: Deactivated successfully. Feb 14 01:10:03.091550 containerd[1629]: time="2025-02-14T01:10:03.091409165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:03.092755 containerd[1629]: time="2025-02-14T01:10:03.092705197Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 14 01:10:03.095173 containerd[1629]: time="2025-02-14T01:10:03.093609169Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:03.096288 containerd[1629]: time="2025-02-14T01:10:03.096199755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:03.098102 containerd[1629]: time="2025-02-14T01:10:03.097599192Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.433567513s" Feb 14 01:10:03.098102 containerd[1629]: time="2025-02-14T01:10:03.097761881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 14 01:10:03.146579 containerd[1629]: time="2025-02-14T01:10:03.146141226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 14 01:10:03.856879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956096336.mount: Deactivated successfully. 
Feb 14 01:10:05.337360 containerd[1629]: time="2025-02-14T01:10:05.336017213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:05.338730 containerd[1629]: time="2025-02-14T01:10:05.338666388Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 14 01:10:05.340834 containerd[1629]: time="2025-02-14T01:10:05.340763402Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:05.350783 containerd[1629]: time="2025-02-14T01:10:05.345708629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:05.350783 containerd[1629]: time="2025-02-14T01:10:05.347464997Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.201239592s" Feb 14 01:10:05.350783 containerd[1629]: time="2025-02-14T01:10:05.347505799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 14 01:10:05.385547 containerd[1629]: time="2025-02-14T01:10:05.385428562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 14 01:10:06.012906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235438877.mount: Deactivated successfully. 
Feb 14 01:10:06.017909 containerd[1629]: time="2025-02-14T01:10:06.017801100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:06.020326 containerd[1629]: time="2025-02-14T01:10:06.020261159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 14 01:10:06.021781 containerd[1629]: time="2025-02-14T01:10:06.021666187Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:06.026672 containerd[1629]: time="2025-02-14T01:10:06.025361008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:06.027202 containerd[1629]: time="2025-02-14T01:10:06.026978571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 641.440582ms" Feb 14 01:10:06.027202 containerd[1629]: time="2025-02-14T01:10:06.027031270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 14 01:10:06.069415 containerd[1629]: time="2025-02-14T01:10:06.068930123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 14 01:10:06.734272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101429640.mount: Deactivated successfully. Feb 14 01:10:07.193256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 14 01:10:07.207096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:07.711870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:07.730921 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:10:07.879423 kubelet[2311]: E0214 01:10:07.878851 2311 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:10:07.881669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:10:07.882097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:10:09.282158 update_engine[1622]: I20250214 01:10:09.280961 1622 update_attempter.cc:509] Updating boot flags... 
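[Annotation] registry.k8s.io/pause:3.9, pulled above in roughly 641ms, is the sandbox image the later RunPodSandbox calls rely on: a tiny binary that sleeps forever and reaps orphaned children so the pod's namespaces stay alive. The real image ships a small C program; the Go analogue below is purely illustrative:

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

// A rough analogue of the pause binary: as PID 1 of a pod sandbox it
// blocks on signals and reaps any exited children to avoid zombies.
func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM, syscall.SIGCHLD)
	for sig := range sigs {
		switch sig {
		case syscall.SIGCHLD:
			// Reap all exited children without blocking.
			for {
				var ws syscall.WaitStatus
				pid, err := syscall.Wait4(-1, &ws, syscall.WNOHANG, nil)
				if pid <= 0 || err != nil {
					break
				}
			}
		default:
			return // SIGINT/SIGTERM: exit, tearing down the sandbox
		}
	}
}
```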
Feb 14 01:10:09.422860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2356) Feb 14 01:10:09.466626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2357) Feb 14 01:10:10.303248 containerd[1629]: time="2025-02-14T01:10:10.303126867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:10.306851 containerd[1629]: time="2025-02-14T01:10:10.304686846Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Feb 14 01:10:10.306851 containerd[1629]: time="2025-02-14T01:10:10.306324809Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:10.312287 containerd[1629]: time="2025-02-14T01:10:10.312234540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:10.314756 containerd[1629]: time="2025-02-14T01:10:10.314708933Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.245712837s" Feb 14 01:10:10.314939 containerd[1629]: time="2025-02-14T01:10:10.314909806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 14 01:10:15.988283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:16.003031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:16.049990 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-11.scope)... Feb 14 01:10:16.050041 systemd[1]: Reloading... Feb 14 01:10:16.365497 zram_generator::config[2468]: No configuration found. Feb 14 01:10:16.459924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:10:16.561971 systemd[1]: Reloading finished in 511 ms. Feb 14 01:10:16.638805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:16.644241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:16.645298 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 01:10:16.646030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:16.653296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:16.854712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:16.876479 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 01:10:16.954944 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:10:16.955639 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 01:10:16.955750 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:10:16.956505 kubelet[2549]: I0214 01:10:16.955927 2549 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 01:10:17.875389 kubelet[2549]: I0214 01:10:17.875303 2549 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 14 01:10:17.875389 kubelet[2549]: I0214 01:10:17.875362 2549 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 01:10:17.875753 kubelet[2549]: I0214 01:10:17.875720 2549 server.go:927] "Client rotation is on, will bootstrap in background" Feb 14 01:10:17.900808 kubelet[2549]: I0214 01:10:17.900507 2549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:10:17.903488 kubelet[2549]: E0214 01:10:17.903199 2549 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.17.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.926548 kubelet[2549]: I0214 01:10:17.925602 2549 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 01:10:17.927895 kubelet[2549]: I0214 01:10:17.927831 2549 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 01:10:17.930390 kubelet[2549]: I0214 01:10:17.927893 2549 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-krhnz.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 14 01:10:17.931067 kubelet[2549]: I0214 01:10:17.931032 2549 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 01:10:17.931067 kubelet[2549]: I0214 01:10:17.931067 2549 container_manager_linux.go:301] "Creating device plugin manager" Feb 14 01:10:17.931334 kubelet[2549]: I0214 01:10:17.931311 2549 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:10:17.932423 kubelet[2549]: I0214 01:10:17.932364 2549 kubelet.go:400] "Attempting to sync node with API server" Feb 14 01:10:17.932534 kubelet[2549]: I0214 01:10:17.932470 2549 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 01:10:17.932591 kubelet[2549]: I0214 01:10:17.932534 2549 kubelet.go:312] "Adding apiserver pod source" Feb 14 01:10:17.936466 kubelet[2549]: I0214 01:10:17.934280 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 01:10:17.937728 kubelet[2549]: W0214 01:10:17.937652 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-krhnz.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.937818 kubelet[2549]: E0214 01:10:17.937752 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.17.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-krhnz.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.937897 kubelet[2549]: W0214 01:10:17.937858 2549 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.937975 kubelet[2549]: E0214 01:10:17.937910 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.17.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.938899 kubelet[2549]: I0214 01:10:17.938860 2549 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 01:10:17.941246 kubelet[2549]: I0214 01:10:17.941177 2549 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 01:10:17.941344 kubelet[2549]: W0214 01:10:17.941307 2549 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 14 01:10:17.942959 kubelet[2549]: I0214 01:10:17.942534 2549 server.go:1264] "Started kubelet" Feb 14 01:10:17.948308 kubelet[2549]: I0214 01:10:17.948275 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 01:10:17.958723 kubelet[2549]: I0214 01:10:17.957126 2549 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 14 01:10:17.958723 kubelet[2549]: I0214 01:10:17.958227 2549 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 01:10:17.960970 kubelet[2549]: I0214 01:10:17.960946 2549 server.go:455] "Adding debug handlers to kubelet server" Feb 14 01:10:17.962539 kubelet[2549]: I0214 01:10:17.962477 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 01:10:17.962966 kubelet[2549]: I0214 01:10:17.962944 2549 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 01:10:17.966954 kubelet[2549]: I0214 01:10:17.966885 2549 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 01:10:17.967049 kubelet[2549]: I0214 01:10:17.967013 2549 reconciler.go:26] "Reconciler: start to sync state" Feb 14 01:10:17.977034 kubelet[2549]: I0214 01:10:17.976997 2549 factory.go:221] Registration of the systemd container factory successfully Feb 14 01:10:17.978175 kubelet[2549]: I0214 01:10:17.977301 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 01:10:17.978175 kubelet[2549]: E0214 01:10:17.977955 2549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-krhnz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.130:6443: connect: connection refused" interval="200ms" Feb 14 01:10:17.984050 kubelet[2549]: E0214 01:10:17.983689 2549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.17.130:6443/api/v1/namespaces/default/events\": dial tcp 10.230.17.130:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-krhnz.gb1.brightbox.com.1823edddcae1cc85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-krhnz.gb1.brightbox.com,UID:srv-krhnz.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-krhnz.gb1.brightbox.com,},FirstTimestamp:2025-02-14 01:10:17.942494341 +0000 UTC m=+1.057799195,LastTimestamp:2025-02-14 01:10:17.942494341 +0000 UTC m=+1.057799195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-krhnz.gb1.brightbox.com,}" Feb 14 01:10:17.986545 kubelet[2549]: W0214 01:10:17.985282 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.986545 kubelet[2549]: E0214 01:10:17.985345 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.17.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:17.986545 kubelet[2549]: I0214 01:10:17.985730 2549 factory.go:221] Registration of the containerd container factory successfully Feb 14 01:10:17.996592 kubelet[2549]: I0214 01:10:17.996531 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 01:10:18.006103 kubelet[2549]: I0214 01:10:18.006069 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 01:10:18.006274 kubelet[2549]: I0214 01:10:18.006130 2549 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 01:10:18.006274 kubelet[2549]: I0214 01:10:18.006165 2549 kubelet.go:2337] "Starting kubelet main sync loop" Feb 14 01:10:18.006274 kubelet[2549]: E0214 01:10:18.006245 2549 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 01:10:18.009856 kubelet[2549]: W0214 01:10:18.009194 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:18.009856 kubelet[2549]: E0214 01:10:18.009388 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.17.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:18.036775 kubelet[2549]: I0214 01:10:18.036729 2549 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 01:10:18.037024 kubelet[2549]: I0214 01:10:18.037002 2549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 01:10:18.037184 kubelet[2549]: I0214 01:10:18.037165 2549 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:10:18.039918 kubelet[2549]: I0214 01:10:18.039894 2549 policy_none.go:49] "None policy: Start" Feb 14 01:10:18.041326 kubelet[2549]: I0214 01:10:18.041211 2549 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 01:10:18.041326 kubelet[2549]: I0214 01:10:18.041264 2549 state_mem.go:35] "Initializing new in-memory state store" Feb 14 01:10:18.051478 kubelet[2549]: 
I0214 01:10:18.049640 2549 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 01:10:18.051478 kubelet[2549]: I0214 01:10:18.049952 2549 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 01:10:18.051478 kubelet[2549]: I0214 01:10:18.050156 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 01:10:18.054379 kubelet[2549]: E0214 01:10:18.054325 2549 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-krhnz.gb1.brightbox.com\" not found" Feb 14 01:10:18.060952 kubelet[2549]: I0214 01:10:18.060891 2549 kubelet_node_status.go:73] "Attempting to register node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.061495 kubelet[2549]: E0214 01:10:18.061439 2549 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.17.130:6443/api/v1/nodes\": dial tcp 10.230.17.130:6443: connect: connection refused" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.107505 kubelet[2549]: I0214 01:10:18.107317 2549 topology_manager.go:215] "Topology Admit Handler" podUID="88c1f36a25acec7fc4dd05e5c96b91aa" podNamespace="kube-system" podName="kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.111131 kubelet[2549]: I0214 01:10:18.110841 2549 topology_manager.go:215] "Topology Admit Handler" podUID="a544416b8ab7c0e74a771633384ddb8d" podNamespace="kube-system" podName="kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.115668 kubelet[2549]: I0214 01:10:18.114864 2549 topology_manager.go:215] "Topology Admit Handler" podUID="8eaf65945ee1ba0aa3234e7462b8b193" podNamespace="kube-system" podName="kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.179347 kubelet[2549]: E0214 01:10:18.179085 2549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-krhnz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.130:6443: connect: connection refused" interval="400ms" Feb 14 01:10:18.265219 kubelet[2549]: I0214 01:10:18.265129 2549 kubelet_node_status.go:73] "Attempting to register node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.265721 kubelet[2549]: E0214 01:10:18.265655 2549 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.17.130:6443/api/v1/nodes\": dial tcp 10.230.17.130:6443: connect: connection refused" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.269744 kubelet[2549]: I0214 01:10:18.269279 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-ca-certs\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.269744 kubelet[2549]: I0214 01:10:18.269341 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-usr-share-ca-certificates\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.269744 kubelet[2549]: I0214 01:10:18.269393 2549 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-ca-certs\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.269744 kubelet[2549]: I0214 01:10:18.269426 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-flexvolume-dir\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.269744 kubelet[2549]: I0214 01:10:18.269500 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-k8s-certs\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.270074 kubelet[2549]: I0214 01:10:18.269538 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-kubeconfig\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.270074 kubelet[2549]: I0214 01:10:18.269568 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.270074 kubelet[2549]: I0214 01:10:18.269599 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a544416b8ab7c0e74a771633384ddb8d-kubeconfig\") pod \"kube-scheduler-srv-krhnz.gb1.brightbox.com\" (UID: \"a544416b8ab7c0e74a771633384ddb8d\") " pod="kube-system/kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.270074 kubelet[2549]: I0214 01:10:18.269628 2549 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-k8s-certs\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.419988 containerd[1629]: time="2025-02-14T01:10:18.419884302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-krhnz.gb1.brightbox.com,Uid:88c1f36a25acec7fc4dd05e5c96b91aa,Namespace:kube-system,Attempt:0,}" Feb 14 01:10:18.429133 containerd[1629]: time="2025-02-14T01:10:18.429088585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-krhnz.gb1.brightbox.com,Uid:8eaf65945ee1ba0aa3234e7462b8b193,Namespace:kube-system,Attempt:0,}" Feb 14 01:10:18.429988 containerd[1629]: 
time="2025-02-14T01:10:18.429101450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-krhnz.gb1.brightbox.com,Uid:a544416b8ab7c0e74a771633384ddb8d,Namespace:kube-system,Attempt:0,}" Feb 14 01:10:18.580788 kubelet[2549]: E0214 01:10:18.580704 2549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-krhnz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.130:6443: connect: connection refused" interval="800ms" Feb 14 01:10:18.669123 kubelet[2549]: I0214 01:10:18.668737 2549 kubelet_node_status.go:73] "Attempting to register node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:18.669490 kubelet[2549]: E0214 01:10:18.669439 2549 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.17.130:6443/api/v1/nodes\": dial tcp 10.230.17.130:6443: connect: connection refused" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:19.011379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886284008.mount: Deactivated successfully. Feb 14 01:10:19.019637 containerd[1629]: time="2025-02-14T01:10:19.019550358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:10:19.020925 containerd[1629]: time="2025-02-14T01:10:19.020878974Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:10:19.022805 containerd[1629]: time="2025-02-14T01:10:19.022581207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:10:19.022805 containerd[1629]: time="2025-02-14T01:10:19.022638721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 14 01:10:19.024013 containerd[1629]: time="2025-02-14T01:10:19.023977891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:10:19.024435 containerd[1629]: time="2025-02-14T01:10:19.024395327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:10:19.025094 containerd[1629]: time="2025-02-14T01:10:19.025056232Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:10:19.030305 containerd[1629]: time="2025-02-14T01:10:19.030243101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:10:19.031947 containerd[1629]: time="2025-02-14T01:10:19.031536467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.120331ms" Feb 14 01:10:19.034925 containerd[1629]: 
time="2025-02-14T01:10:19.033831214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.660511ms" Feb 14 01:10:19.034925 containerd[1629]: time="2025-02-14T01:10:19.034710517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.632541ms" Feb 14 01:10:19.133586 kubelet[2549]: W0214 01:10:19.132749 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-krhnz.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.133586 kubelet[2549]: E0214 01:10:19.132860 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.17.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-krhnz.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.186890 kubelet[2549]: W0214 01:10:19.186691 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.186890 kubelet[2549]: E0214 01:10:19.186803 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.17.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.231709 kubelet[2549]: W0214 01:10:19.231607 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.232392 kubelet[2549]: E0214 01:10:19.232146 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.17.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.239340 containerd[1629]: time="2025-02-14T01:10:19.238787763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:19.240062 containerd[1629]: time="2025-02-14T01:10:19.239856272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:19.241663 containerd[1629]: time="2025-02-14T01:10:19.241309644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:19.241663 containerd[1629]: time="2025-02-14T01:10:19.241387649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:19.241663 containerd[1629]: time="2025-02-14T01:10:19.241412800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.243773 containerd[1629]: time="2025-02-14T01:10:19.243236699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.244441 containerd[1629]: time="2025-02-14T01:10:19.243502176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.246234 containerd[1629]: time="2025-02-14T01:10:19.245155636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.249313 containerd[1629]: time="2025-02-14T01:10:19.248792894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:19.249313 containerd[1629]: time="2025-02-14T01:10:19.248867366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:19.249313 containerd[1629]: time="2025-02-14T01:10:19.248919357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.249313 containerd[1629]: time="2025-02-14T01:10:19.249187839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:19.384743 kubelet[2549]: E0214 01:10:19.383091 2549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-krhnz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.130:6443: connect: connection refused" interval="1.6s" Feb 14 01:10:19.415171 kubelet[2549]: W0214 01:10:19.415049 2549 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.416146 kubelet[2549]: E0214 01:10:19.415437 2549 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.17.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:19.458457 containerd[1629]: time="2025-02-14T01:10:19.458160036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-krhnz.gb1.brightbox.com,Uid:88c1f36a25acec7fc4dd05e5c96b91aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f917f34c8ec455328b40713798dfbc5d346b0576306448b5dec3246ec7d307ee\"" Feb 14 01:10:19.469865 containerd[1629]: time="2025-02-14T01:10:19.469792686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-krhnz.gb1.brightbox.com,Uid:8eaf65945ee1ba0aa3234e7462b8b193,Namespace:kube-system,Attempt:0,} returns sandbox id \"995145755058633166cb6572b98c211fc89651b789a21546de6161dfdf37c7a4\"" Feb 14 01:10:19.479423 kubelet[2549]: I0214 01:10:19.479380 2549 kubelet_node_status.go:73] "Attempting to register node" 
node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:19.480135 kubelet[2549]: E0214 01:10:19.479894 2549 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.17.130:6443/api/v1/nodes\": dial tcp 10.230.17.130:6443: connect: connection refused" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:19.484832 containerd[1629]: time="2025-02-14T01:10:19.484636101Z" level=info msg="CreateContainer within sandbox \"995145755058633166cb6572b98c211fc89651b789a21546de6161dfdf37c7a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 01:10:19.486007 containerd[1629]: time="2025-02-14T01:10:19.485882142Z" level=info msg="CreateContainer within sandbox \"f917f34c8ec455328b40713798dfbc5d346b0576306448b5dec3246ec7d307ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 01:10:19.491811 containerd[1629]: time="2025-02-14T01:10:19.491620676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-krhnz.gb1.brightbox.com,Uid:a544416b8ab7c0e74a771633384ddb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c61f40607fe1a2f069468661d431fa62c074de69defcca7ae0e276783fc24a2c\"" Feb 14 01:10:19.494855 containerd[1629]: time="2025-02-14T01:10:19.494711531Z" level=info msg="CreateContainer within sandbox \"c61f40607fe1a2f069468661d431fa62c074de69defcca7ae0e276783fc24a2c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 01:10:19.513593 containerd[1629]: time="2025-02-14T01:10:19.513518804Z" level=info msg="CreateContainer within sandbox \"995145755058633166cb6572b98c211fc89651b789a21546de6161dfdf37c7a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c0af463d9e5007aa2c2420195473788da9a9b406bc7c7f9f907da2b83842b37\"" Feb 14 01:10:19.514807 containerd[1629]: time="2025-02-14T01:10:19.514776394Z" level=info msg="StartContainer for \"5c0af463d9e5007aa2c2420195473788da9a9b406bc7c7f9f907da2b83842b37\"" Feb 14 01:10:19.515217 containerd[1629]: time="2025-02-14T01:10:19.514803196Z" level=info msg="CreateContainer within sandbox \"f917f34c8ec455328b40713798dfbc5d346b0576306448b5dec3246ec7d307ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cf461573405989dd4b7d047d65a5cc6f0a9482a65223f1b053e45cee75f9329c\"" Feb 14 01:10:19.516861 containerd[1629]: time="2025-02-14T01:10:19.516829288Z" level=info msg="StartContainer for \"cf461573405989dd4b7d047d65a5cc6f0a9482a65223f1b053e45cee75f9329c\"" Feb 14 01:10:19.520291 containerd[1629]: time="2025-02-14T01:10:19.520237192Z" level=info msg="CreateContainer within sandbox \"c61f40607fe1a2f069468661d431fa62c074de69defcca7ae0e276783fc24a2c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c187f8b211aa2591ff09886e195740437a1a6a3b6e7afe778ccb2fd243a2ebd\"" Feb 14 01:10:19.520875 containerd[1629]: time="2025-02-14T01:10:19.520841904Z" level=info msg="StartContainer for \"9c187f8b211aa2591ff09886e195740437a1a6a3b6e7afe778ccb2fd243a2ebd\"" Feb 14 01:10:19.697387 containerd[1629]: time="2025-02-14T01:10:19.697332247Z" level=info msg="StartContainer for \"5c0af463d9e5007aa2c2420195473788da9a9b406bc7c7f9f907da2b83842b37\" returns successfully" Feb 14 01:10:19.734478 containerd[1629]: time="2025-02-14T01:10:19.733558317Z" level=info msg="StartContainer for \"cf461573405989dd4b7d047d65a5cc6f0a9482a65223f1b053e45cee75f9329c\" returns successfully" Feb 14 01:10:19.750141 containerd[1629]: time="2025-02-14T01:10:19.750063666Z" level=info msg="StartContainer 
for \"9c187f8b211aa2591ff09886e195740437a1a6a3b6e7afe778ccb2fd243a2ebd\" returns successfully" Feb 14 01:10:20.094375 kubelet[2549]: E0214 01:10:20.093830 2549 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.17.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.17.130:6443: connect: connection refused Feb 14 01:10:21.086731 kubelet[2549]: I0214 01:10:21.085934 2549 kubelet_node_status.go:73] "Attempting to register node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:22.939238 kubelet[2549]: I0214 01:10:22.939188 2549 apiserver.go:52] "Watching apiserver" Feb 14 01:10:22.948501 kubelet[2549]: I0214 01:10:22.948467 2549 kubelet_node_status.go:76] "Successfully registered node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:22.967811 kubelet[2549]: I0214 01:10:22.967773 2549 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 01:10:25.329840 systemd[1]: Reloading requested from client PID 2825 ('systemctl') (unit session-11.scope)... Feb 14 01:10:25.329898 systemd[1]: Reloading... Feb 14 01:10:25.459891 zram_generator::config[2864]: No configuration found. Feb 14 01:10:25.658172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:10:25.756140 kubelet[2549]: W0214 01:10:25.755439 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:10:25.787100 systemd[1]: Reloading finished in 456 ms. Feb 14 01:10:25.840514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:25.845284 kubelet[2549]: I0214 01:10:25.840715 2549 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:10:25.865752 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 01:10:25.866542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:25.879627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:10:26.081675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:10:26.094353 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 01:10:26.225905 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:10:26.227293 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 01:10:26.227293 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 01:10:26.227293 kubelet[2938]: I0214 01:10:26.226606 2938 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 01:10:26.234329 kubelet[2938]: I0214 01:10:26.234274 2938 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 14 01:10:26.234329 kubelet[2938]: I0214 01:10:26.234319 2938 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 01:10:26.234680 kubelet[2938]: I0214 01:10:26.234637 2938 server.go:927] "Client rotation is on, will bootstrap in background" Feb 14 01:10:26.236759 kubelet[2938]: I0214 01:10:26.236717 2938 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 14 01:10:26.239127 kubelet[2938]: I0214 01:10:26.238363 2938 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:10:26.252162 kubelet[2938]: I0214 01:10:26.252085 2938 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 14 01:10:26.252901 kubelet[2938]: I0214 01:10:26.252838 2938 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 01:10:26.253216 kubelet[2938]: I0214 01:10:26.252900 2938 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-krhnz.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 14 01:10:26.253488 kubelet[2938]: I0214 01:10:26.253254 2938 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 01:10:26.253488 kubelet[2938]: I0214 01:10:26.253277 2938 container_manager_linux.go:301] "Creating device plugin manager" Feb 14 01:10:26.253488 kubelet[2938]: I0214 01:10:26.253383 2938 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:10:26.253637 kubelet[2938]: I0214 01:10:26.253620 2938 kubelet.go:400] "Attempting to sync node with API server" Feb 14 01:10:26.254725 kubelet[2938]: I0214 01:10:26.254539 2938 kubelet.go:301] 
"Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 01:10:26.254725 kubelet[2938]: I0214 01:10:26.254607 2938 kubelet.go:312] "Adding apiserver pod source" Feb 14 01:10:26.256509 kubelet[2938]: I0214 01:10:26.256473 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 01:10:26.259980 kubelet[2938]: I0214 01:10:26.259954 2938 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 01:10:26.260326 kubelet[2938]: I0214 01:10:26.260303 2938 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 01:10:26.261466 kubelet[2938]: I0214 01:10:26.261016 2938 server.go:1264] "Started kubelet" Feb 14 01:10:26.269066 kubelet[2938]: I0214 01:10:26.268115 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 01:10:26.285186 kubelet[2938]: I0214 01:10:26.285126 2938 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 01:10:26.289955 kubelet[2938]: I0214 01:10:26.289909 2938 server.go:455] "Adding debug handlers to kubelet server" Feb 14 01:10:26.292567 kubelet[2938]: I0214 01:10:26.292492 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 01:10:26.293061 kubelet[2938]: I0214 01:10:26.292996 2938 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 01:10:26.298789 kubelet[2938]: I0214 01:10:26.298755 2938 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 14 01:10:26.303468 kubelet[2938]: I0214 01:10:26.299866 2938 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 01:10:26.303468 kubelet[2938]: I0214 01:10:26.300111 2938 reconciler.go:26] "Reconciler: start to sync state" Feb 14 01:10:26.308374 kubelet[2938]: I0214 01:10:26.308268 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 01:10:26.310513 kubelet[2938]: I0214 01:10:26.309978 2938 factory.go:221] Registration of the systemd container factory successfully Feb 14 01:10:26.310513 kubelet[2938]: I0214 01:10:26.310130 2938 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 01:10:26.314469 kubelet[2938]: I0214 01:10:26.314409 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 01:10:26.314612 kubelet[2938]: I0214 01:10:26.314513 2938 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 01:10:26.314612 kubelet[2938]: I0214 01:10:26.314550 2938 kubelet.go:2337] "Starting kubelet main sync loop" Feb 14 01:10:26.314702 kubelet[2938]: E0214 01:10:26.314624 2938 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 01:10:26.317232 kubelet[2938]: E0214 01:10:26.317163 2938 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 01:10:26.324231 kubelet[2938]: I0214 01:10:26.323372 2938 factory.go:221] Registration of the containerd container factory successfully Feb 14 01:10:26.415765 kubelet[2938]: I0214 01:10:26.410843 2938 kubelet_node_status.go:73] "Attempting to register node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.415915 kubelet[2938]: E0214 01:10:26.415781 2938 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 14 01:10:26.424270 kubelet[2938]: I0214 01:10:26.424177 2938 kubelet_node_status.go:112] "Node was previously registered" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.424362 kubelet[2938]: I0214 01:10:26.424320 2938 kubelet_node_status.go:76] "Successfully registered node" node="srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.481806 kubelet[2938]: I0214 01:10:26.481770 2938 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 01:10:26.482499 kubelet[2938]: I0214 01:10:26.482064 2938 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 01:10:26.482499 kubelet[2938]: I0214 01:10:26.482110 2938 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:10:26.482499 kubelet[2938]: I0214 01:10:26.482359 2938 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 14 01:10:26.482499 kubelet[2938]: I0214 01:10:26.482382 2938 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 14 01:10:26.482499 kubelet[2938]: I0214 01:10:26.482428 2938 policy_none.go:49] "None policy: Start" Feb 14 01:10:26.484524 kubelet[2938]: I0214 01:10:26.484012 2938 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 01:10:26.484524 kubelet[2938]: I0214 01:10:26.484058 2938 state_mem.go:35] "Initializing new in-memory state store" Feb 14 01:10:26.484524 kubelet[2938]: I0214 01:10:26.484291 2938 state_mem.go:75] "Updated machine memory state" Feb 14 01:10:26.490277 kubelet[2938]: I0214 01:10:26.490251 2938 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 01:10:26.490698 kubelet[2938]: I0214 01:10:26.490653 2938 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 01:10:26.501872 kubelet[2938]: I0214 01:10:26.501838 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 01:10:26.617059 kubelet[2938]: I0214 01:10:26.616992 2938 topology_manager.go:215] "Topology Admit Handler" podUID="a544416b8ab7c0e74a771633384ddb8d" podNamespace="kube-system" podName="kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.621677 kubelet[2938]: I0214 01:10:26.620471 2938 topology_manager.go:215] "Topology Admit Handler" podUID="8eaf65945ee1ba0aa3234e7462b8b193" podNamespace="kube-system" podName="kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.621677 kubelet[2938]: I0214 01:10:26.620564 2938 topology_manager.go:215] "Topology Admit Handler" podUID="88c1f36a25acec7fc4dd05e5c96b91aa" podNamespace="kube-system" podName="kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.629723 kubelet[2938]: W0214 01:10:26.629690 2938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:10:26.632527 kubelet[2938]: W0214 01:10:26.632363 2938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:10:26.633361 kubelet[2938]: W0214 01:10:26.632732 2938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:10:26.633361 kubelet[2938]: E0214 01:10:26.632800 2938 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-krhnz.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706064 kubelet[2938]: I0214 01:10:26.705942 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-usr-share-ca-certificates\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706606 kubelet[2938]: I0214 01:10:26.706015 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-ca-certs\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706606 kubelet[2938]: I0214 01:10:26.706391 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eaf65945ee1ba0aa3234e7462b8b193-k8s-certs\") pod \"kube-apiserver-srv-krhnz.gb1.brightbox.com\" (UID: \"8eaf65945ee1ba0aa3234e7462b8b193\") " pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706606 kubelet[2938]: I0214 01:10:26.706462 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-ca-certs\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706606 kubelet[2938]: I0214 01:10:26.706517 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-flexvolume-dir\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.706606 kubelet[2938]: I0214 01:10:26.706553 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-k8s-certs\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.707141 kubelet[2938]: I0214 01:10:26.706926 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-kubeconfig\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " 
pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.707141 kubelet[2938]: I0214 01:10:26.707002 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88c1f36a25acec7fc4dd05e5c96b91aa-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-krhnz.gb1.brightbox.com\" (UID: \"88c1f36a25acec7fc4dd05e5c96b91aa\") " pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:26.707141 kubelet[2938]: I0214 01:10:26.707041 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a544416b8ab7c0e74a771633384ddb8d-kubeconfig\") pod \"kube-scheduler-srv-krhnz.gb1.brightbox.com\" (UID: \"a544416b8ab7c0e74a771633384ddb8d\") " pod="kube-system/kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:27.257942 kubelet[2938]: I0214 01:10:27.257893 2938 apiserver.go:52] "Watching apiserver" Feb 14 01:10:27.301077 kubelet[2938]: I0214 01:10:27.300994 2938 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 01:10:27.415706 kubelet[2938]: W0214 01:10:27.415654 2938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:10:27.415898 kubelet[2938]: E0214 01:10:27.415762 2938 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-krhnz.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-krhnz.gb1.brightbox.com" Feb 14 01:10:27.531617 kubelet[2938]: I0214 01:10:27.530795 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-krhnz.gb1.brightbox.com" podStartSLOduration=2.530759556 podStartE2EDuration="2.530759556s" podCreationTimestamp="2025-02-14 01:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:10:27.484610113 +0000 UTC m=+1.352656631" watchObservedRunningTime="2025-02-14 01:10:27.530759556 +0000 UTC m=+1.398806078" Feb 14 01:10:27.562808 kubelet[2938]: I0214 01:10:27.561594 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-krhnz.gb1.brightbox.com" podStartSLOduration=1.56157009 podStartE2EDuration="1.56157009s" podCreationTimestamp="2025-02-14 01:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:10:27.533864331 +0000 UTC m=+1.401910859" watchObservedRunningTime="2025-02-14 01:10:27.56157009 +0000 UTC m=+1.429616606" Feb 14 01:10:30.756805 kubelet[2938]: I0214 01:10:30.756419 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-krhnz.gb1.brightbox.com" podStartSLOduration=4.756333655 podStartE2EDuration="4.756333655s" podCreationTimestamp="2025-02-14 01:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:10:27.563597146 +0000 UTC m=+1.431643685" watchObservedRunningTime="2025-02-14 01:10:30.756333655 +0000 UTC m=+4.624380176" Feb 14 01:10:32.085664 sudo[1946]: pam_unix(sudo:session): session closed for user root Feb 14 01:10:32.231275 sshd[1942]: 
pam_unix(sshd:session): session closed for user core Feb 14 01:10:32.239086 systemd[1]: sshd@8-10.230.17.130:22-147.75.109.163:40304.service: Deactivated successfully. Feb 14 01:10:32.245527 systemd-logind[1617]: Session 11 logged out. Waiting for processes to exit. Feb 14 01:10:32.246575 systemd[1]: session-11.scope: Deactivated successfully. Feb 14 01:10:32.249062 systemd-logind[1617]: Removed session 11. Feb 14 01:10:40.257211 kubelet[2938]: I0214 01:10:40.257092 2938 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 14 01:10:40.262553 containerd[1629]: time="2025-02-14T01:10:40.261239863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 14 01:10:40.265034 kubelet[2938]: I0214 01:10:40.264532 2938 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 14 01:10:41.138975 kubelet[2938]: I0214 01:10:41.138886 2938 topology_manager.go:215] "Topology Admit Handler" podUID="6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23" podNamespace="kube-system" podName="kube-proxy-ndhs8" Feb 14 01:10:41.296812 kubelet[2938]: I0214 01:10:41.296725 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23-kube-proxy\") pod \"kube-proxy-ndhs8\" (UID: \"6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23\") " pod="kube-system/kube-proxy-ndhs8" Feb 14 01:10:41.297933 kubelet[2938]: I0214 01:10:41.296817 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23-xtables-lock\") pod \"kube-proxy-ndhs8\" (UID: \"6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23\") " pod="kube-system/kube-proxy-ndhs8" Feb 14 01:10:41.297933 kubelet[2938]: I0214 01:10:41.296879 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q9r\" (UniqueName: \"kubernetes.io/projected/6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23-kube-api-access-t8q9r\") pod \"kube-proxy-ndhs8\" (UID: \"6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23\") " pod="kube-system/kube-proxy-ndhs8" Feb 14 01:10:41.297933 kubelet[2938]: I0214 01:10:41.296915 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23-lib-modules\") pod \"kube-proxy-ndhs8\" (UID: \"6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23\") " pod="kube-system/kube-proxy-ndhs8" Feb 14 01:10:41.455489 containerd[1629]: time="2025-02-14T01:10:41.455402626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndhs8,Uid:6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23,Namespace:kube-system,Attempt:0,}" Feb 14 01:10:41.479266 kubelet[2938]: I0214 01:10:41.479202 2938 topology_manager.go:215] "Topology Admit Handler" podUID="026b65b2-4324-44d1-9b1d-4f64b540841a" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-stlnp" Feb 14 01:10:41.504810 kubelet[2938]: I0214 01:10:41.504188 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbmf\" (UniqueName: \"kubernetes.io/projected/026b65b2-4324-44d1-9b1d-4f64b540841a-kube-api-access-qgbmf\") pod \"tigera-operator-7bc55997bb-stlnp\" (UID: \"026b65b2-4324-44d1-9b1d-4f64b540841a\") " 
pod="tigera-operator/tigera-operator-7bc55997bb-stlnp" Feb 14 01:10:41.506746 kubelet[2938]: I0214 01:10:41.506499 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/026b65b2-4324-44d1-9b1d-4f64b540841a-var-lib-calico\") pod \"tigera-operator-7bc55997bb-stlnp\" (UID: \"026b65b2-4324-44d1-9b1d-4f64b540841a\") " pod="tigera-operator/tigera-operator-7bc55997bb-stlnp" Feb 14 01:10:41.564739 containerd[1629]: time="2025-02-14T01:10:41.564124019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:41.564739 containerd[1629]: time="2025-02-14T01:10:41.564620875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:41.564739 containerd[1629]: time="2025-02-14T01:10:41.564673366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:41.565755 containerd[1629]: time="2025-02-14T01:10:41.565530312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:41.602820 systemd[1]: run-containerd-runc-k8s.io-a1c37e781e2ed045c7687af5124e9bef61a5e4bf882a314af9b6929d16c375bf-runc.fYJWnx.mount: Deactivated successfully. Feb 14 01:10:41.660260 containerd[1629]: time="2025-02-14T01:10:41.659839462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndhs8,Uid:6e22e3f3-b83c-4d35-a9b2-d4b3148a1a23,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1c37e781e2ed045c7687af5124e9bef61a5e4bf882a314af9b6929d16c375bf\"" Feb 14 01:10:41.668870 containerd[1629]: time="2025-02-14T01:10:41.668148603Z" level=info msg="CreateContainer within sandbox \"a1c37e781e2ed045c7687af5124e9bef61a5e4bf882a314af9b6929d16c375bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 01:10:41.687183 containerd[1629]: time="2025-02-14T01:10:41.687088618Z" level=info msg="CreateContainer within sandbox \"a1c37e781e2ed045c7687af5124e9bef61a5e4bf882a314af9b6929d16c375bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"218da6321435514d2b20f40d28f5a2d4f761d4c79ad7efc4a80ace098018bae7\"" Feb 14 01:10:41.689507 containerd[1629]: time="2025-02-14T01:10:41.689076691Z" level=info msg="StartContainer for \"218da6321435514d2b20f40d28f5a2d4f761d4c79ad7efc4a80ace098018bae7\"" Feb 14 01:10:41.796636 containerd[1629]: time="2025-02-14T01:10:41.796352071Z" level=info msg="StartContainer for \"218da6321435514d2b20f40d28f5a2d4f761d4c79ad7efc4a80ace098018bae7\" returns successfully" Feb 14 01:10:41.820876 containerd[1629]: time="2025-02-14T01:10:41.820727632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-stlnp,Uid:026b65b2-4324-44d1-9b1d-4f64b540841a,Namespace:tigera-operator,Attempt:0,}" Feb 14 01:10:41.859737 containerd[1629]: time="2025-02-14T01:10:41.859053601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:41.859737 containerd[1629]: time="2025-02-14T01:10:41.859484803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:41.859737 containerd[1629]: time="2025-02-14T01:10:41.859564394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:41.860747 containerd[1629]: time="2025-02-14T01:10:41.859809299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:41.982877 containerd[1629]: time="2025-02-14T01:10:41.982807519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-stlnp,Uid:026b65b2-4324-44d1-9b1d-4f64b540841a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"541f4fe7220d132523ffa836bc7a1b14c282bc5a46a5dd5e8e83ec00a65832d9\"" Feb 14 01:10:41.988051 containerd[1629]: time="2025-02-14T01:10:41.986656258Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 14 01:10:42.435504 kubelet[2938]: I0214 01:10:42.434119 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ndhs8" podStartSLOduration=1.434070243 podStartE2EDuration="1.434070243s" podCreationTimestamp="2025-02-14 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:10:42.433968456 +0000 UTC m=+16.302014991" watchObservedRunningTime="2025-02-14 01:10:42.434070243 +0000 UTC m=+16.302116778" Feb 14 01:10:44.058649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797382122.mount: Deactivated successfully. Feb 14 01:10:45.270634 containerd[1629]: time="2025-02-14T01:10:45.270501140Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:45.272637 containerd[1629]: time="2025-02-14T01:10:45.272555027Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 14 01:10:45.273488 containerd[1629]: time="2025-02-14T01:10:45.272768473Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:45.277628 containerd[1629]: time="2025-02-14T01:10:45.276995806Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:45.278618 containerd[1629]: time="2025-02-14T01:10:45.278580525Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.291869465s" Feb 14 01:10:45.278806 containerd[1629]: time="2025-02-14T01:10:45.278776106Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 14 01:10:45.291148 containerd[1629]: time="2025-02-14T01:10:45.291110597Z" level=info msg="CreateContainer within sandbox \"541f4fe7220d132523ffa836bc7a1b14c282bc5a46a5dd5e8e83ec00a65832d9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 14 01:10:45.325356 containerd[1629]: 
time="2025-02-14T01:10:45.325175039Z" level=info msg="CreateContainer within sandbox \"541f4fe7220d132523ffa836bc7a1b14c282bc5a46a5dd5e8e83ec00a65832d9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1a6f66719ad2d369640829b1b7728f40459a791ad84bb510fb1c951e07e08fbc\"" Feb 14 01:10:45.326052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203253763.mount: Deactivated successfully. Feb 14 01:10:45.328215 containerd[1629]: time="2025-02-14T01:10:45.326722162Z" level=info msg="StartContainer for \"1a6f66719ad2d369640829b1b7728f40459a791ad84bb510fb1c951e07e08fbc\"" Feb 14 01:10:45.416956 containerd[1629]: time="2025-02-14T01:10:45.416784865Z" level=info msg="StartContainer for \"1a6f66719ad2d369640829b1b7728f40459a791ad84bb510fb1c951e07e08fbc\" returns successfully" Feb 14 01:10:46.338881 kubelet[2938]: I0214 01:10:46.338710 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-stlnp" podStartSLOduration=2.037408419 podStartE2EDuration="5.338660583s" podCreationTimestamp="2025-02-14 01:10:41 +0000 UTC" firstStartedPulling="2025-02-14 01:10:41.984782062 +0000 UTC m=+15.852828575" lastFinishedPulling="2025-02-14 01:10:45.286034218 +0000 UTC m=+19.154080739" observedRunningTime="2025-02-14 01:10:45.455137341 +0000 UTC m=+19.323183862" watchObservedRunningTime="2025-02-14 01:10:46.338660583 +0000 UTC m=+20.206707118" Feb 14 01:10:48.712550 kubelet[2938]: I0214 01:10:48.712471 2938 topology_manager.go:215] "Topology Admit Handler" podUID="1821f6ee-2bb4-4342-b070-23f3e14d282c" podNamespace="calico-system" podName="calico-typha-7cf6c94fb4-mgxn9" Feb 14 01:10:48.776710 kubelet[2938]: I0214 01:10:48.776660 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1821f6ee-2bb4-4342-b070-23f3e14d282c-tigera-ca-bundle\") pod \"calico-typha-7cf6c94fb4-mgxn9\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " pod="calico-system/calico-typha-7cf6c94fb4-mgxn9" Feb 14 01:10:48.777936 kubelet[2938]: I0214 01:10:48.777575 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1821f6ee-2bb4-4342-b070-23f3e14d282c-typha-certs\") pod \"calico-typha-7cf6c94fb4-mgxn9\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " pod="calico-system/calico-typha-7cf6c94fb4-mgxn9" Feb 14 01:10:48.777936 kubelet[2938]: I0214 01:10:48.777724 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8hr\" (UniqueName: \"kubernetes.io/projected/1821f6ee-2bb4-4342-b070-23f3e14d282c-kube-api-access-xf8hr\") pod \"calico-typha-7cf6c94fb4-mgxn9\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " pod="calico-system/calico-typha-7cf6c94fb4-mgxn9" Feb 14 01:10:48.994598 kubelet[2938]: I0214 01:10:48.994342 2938 topology_manager.go:215] "Topology Admit Handler" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" podNamespace="calico-system" podName="calico-node-xlprx" Feb 14 01:10:49.078823 kubelet[2938]: I0214 01:10:49.078749 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-flexvol-driver-host\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.078823 
kubelet[2938]: I0214 01:10:49.078835 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttjc\" (UniqueName: \"kubernetes.io/projected/2b748ae9-d5a4-45cf-b668-849d2794f614-kube-api-access-vttjc\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079076 kubelet[2938]: I0214 01:10:49.078869 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b748ae9-d5a4-45cf-b668-849d2794f614-node-certs\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079076 kubelet[2938]: I0214 01:10:49.078901 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-log-dir\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079076 kubelet[2938]: I0214 01:10:49.078931 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-xtables-lock\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079076 kubelet[2938]: I0214 01:10:49.078956 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-lib-calico\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079076 kubelet[2938]: I0214 01:10:49.078982 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-net-dir\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079440 kubelet[2938]: I0214 01:10:49.079007 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-run-calico\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079440 kubelet[2938]: I0214 01:10:49.079037 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b748ae9-d5a4-45cf-b668-849d2794f614-tigera-ca-bundle\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079440 kubelet[2938]: I0214 01:10:49.079063 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-lib-modules\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079440 kubelet[2938]: I0214 01:10:49.079088 2938 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-policysync\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.079440 kubelet[2938]: I0214 01:10:49.079116 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-bin-dir\") pod \"calico-node-xlprx\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " pod="calico-system/calico-node-xlprx" Feb 14 01:10:49.089467 kubelet[2938]: I0214 01:10:49.089400 2938 topology_manager.go:215] "Topology Admit Handler" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" podNamespace="calico-system" podName="csi-node-driver-mpl8k" Feb 14 01:10:49.123473 kubelet[2938]: E0214 01:10:49.123359 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:10:49.125261 containerd[1629]: time="2025-02-14T01:10:49.125202345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf6c94fb4-mgxn9,Uid:1821f6ee-2bb4-4342-b070-23f3e14d282c,Namespace:calico-system,Attempt:0,}" Feb 14 01:10:49.185396 kubelet[2938]: I0214 01:10:49.179855 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnp6l\" (UniqueName: \"kubernetes.io/projected/f1c564fe-b330-4d68-9185-0946498c1f87-kube-api-access-qnp6l\") pod \"csi-node-driver-mpl8k\" (UID: \"f1c564fe-b330-4d68-9185-0946498c1f87\") " pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:10:49.185396 kubelet[2938]: I0214 01:10:49.179975 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f1c564fe-b330-4d68-9185-0946498c1f87-registration-dir\") pod \"csi-node-driver-mpl8k\" (UID: \"f1c564fe-b330-4d68-9185-0946498c1f87\") " pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:10:49.185396 kubelet[2938]: I0214 01:10:49.180009 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c564fe-b330-4d68-9185-0946498c1f87-kubelet-dir\") pod \"csi-node-driver-mpl8k\" (UID: \"f1c564fe-b330-4d68-9185-0946498c1f87\") " pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:10:49.185396 kubelet[2938]: I0214 01:10:49.180049 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f1c564fe-b330-4d68-9185-0946498c1f87-socket-dir\") pod \"csi-node-driver-mpl8k\" (UID: \"f1c564fe-b330-4d68-9185-0946498c1f87\") " pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:10:49.185396 kubelet[2938]: I0214 01:10:49.180100 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f1c564fe-b330-4d68-9185-0946498c1f87-varrun\") pod \"csi-node-driver-mpl8k\" (UID: \"f1c564fe-b330-4d68-9185-0946498c1f87\") " pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:10:49.213640 kubelet[2938]: E0214 01:10:49.212637 2938 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.213640 kubelet[2938]: W0214 01:10:49.212690 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.213640 kubelet[2938]: E0214 01:10:49.212880 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.215726 kubelet[2938]: E0214 01:10:49.213921 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.215726 kubelet[2938]: W0214 01:10:49.213948 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.219088 kubelet[2938]: E0214 01:10:49.217553 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.219088 kubelet[2938]: W0214 01:10:49.217584 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.219088 kubelet[2938]: E0214 01:10:49.217603 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.219088 kubelet[2938]: E0214 01:10:49.217740 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.219088 kubelet[2938]: E0214 01:10:49.218540 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.219088 kubelet[2938]: W0214 01:10:49.218670 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.219088 kubelet[2938]: E0214 01:10:49.218704 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.222680 kubelet[2938]: E0214 01:10:49.220878 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.222680 kubelet[2938]: W0214 01:10:49.220905 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.222680 kubelet[2938]: E0214 01:10:49.220933 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.222680 kubelet[2938]: E0214 01:10:49.221346 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.222680 kubelet[2938]: W0214 01:10:49.221361 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.222680 kubelet[2938]: E0214 01:10:49.221377 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.223030 kubelet[2938]: E0214 01:10:49.222767 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.223030 kubelet[2938]: W0214 01:10:49.222788 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.223030 kubelet[2938]: E0214 01:10:49.222817 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.228676 kubelet[2938]: E0214 01:10:49.223671 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.228676 kubelet[2938]: W0214 01:10:49.223687 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.228676 kubelet[2938]: E0214 01:10:49.223702 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.240489 kubelet[2938]: E0214 01:10:49.239093 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.240489 kubelet[2938]: W0214 01:10:49.239121 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.240489 kubelet[2938]: E0214 01:10:49.239148 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.291329 kubelet[2938]: E0214 01:10:49.291218 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.292067 kubelet[2938]: W0214 01:10:49.291771 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.292067 kubelet[2938]: E0214 01:10:49.291852 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.298558 kubelet[2938]: E0214 01:10:49.298532 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.300472 kubelet[2938]: W0214 01:10:49.298686 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.300472 kubelet[2938]: E0214 01:10:49.298717 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.301529 kubelet[2938]: E0214 01:10:49.301508 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.301529 kubelet[2938]: W0214 01:10:49.301651 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.301698 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.302030 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.303471 kubelet[2938]: W0214 01:10:49.302616 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.302638 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.302922 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.303471 kubelet[2938]: W0214 01:10:49.302937 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.302952 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.303176 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.303471 kubelet[2938]: W0214 01:10:49.303200 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.303471 kubelet[2938]: E0214 01:10:49.303216 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.310209 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.314259 kubelet[2938]: W0214 01:10:49.310245 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.310522 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.314259 kubelet[2938]: W0214 01:10:49.310537 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.311524 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.311891 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.314259 kubelet[2938]: W0214 01:10:49.311918 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.311934 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.314259 kubelet[2938]: E0214 01:10:49.312530 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.314259 kubelet[2938]: W0214 01:10:49.314181 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.321725 kubelet[2938]: E0214 01:10:49.314208 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.361474 kubelet[2938]: E0214 01:10:49.311506 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.361474 kubelet[2938]: E0214 01:10:49.359590 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.361474 kubelet[2938]: W0214 01:10:49.359616 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.361474 kubelet[2938]: E0214 01:10:49.359644 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.363809 kubelet[2938]: E0214 01:10:49.361878 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.363809 kubelet[2938]: W0214 01:10:49.361898 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.363809 kubelet[2938]: E0214 01:10:49.361915 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.365545 kubelet[2938]: E0214 01:10:49.365522 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.365672 kubelet[2938]: W0214 01:10:49.365650 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.365789 kubelet[2938]: E0214 01:10:49.365767 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.367879 kubelet[2938]: E0214 01:10:49.367510 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.369537 containerd[1629]: time="2025-02-14T01:10:49.367759644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:49.369537 containerd[1629]: time="2025-02-14T01:10:49.367896235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:49.369537 containerd[1629]: time="2025-02-14T01:10:49.367922152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:49.369537 containerd[1629]: time="2025-02-14T01:10:49.368140468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:49.371661 kubelet[2938]: W0214 01:10:49.369772 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.371661 kubelet[2938]: E0214 01:10:49.369814 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.376790 containerd[1629]: time="2025-02-14T01:10:49.372303615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xlprx,Uid:2b748ae9-d5a4-45cf-b668-849d2794f614,Namespace:calico-system,Attempt:0,}" Feb 14 01:10:49.381062 kubelet[2938]: E0214 01:10:49.378536 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.381062 kubelet[2938]: W0214 01:10:49.378555 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.381062 kubelet[2938]: E0214 01:10:49.378575 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.387063 kubelet[2938]: E0214 01:10:49.386974 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.388561 kubelet[2938]: W0214 01:10:49.387251 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.388561 kubelet[2938]: E0214 01:10:49.387305 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.392466 kubelet[2938]: E0214 01:10:49.391914 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.394467 kubelet[2938]: W0214 01:10:49.393480 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.395164 kubelet[2938]: E0214 01:10:49.394827 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.396475 kubelet[2938]: E0214 01:10:49.395523 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.396475 kubelet[2938]: W0214 01:10:49.395542 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.399635 kubelet[2938]: E0214 01:10:49.399484 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.400589 kubelet[2938]: E0214 01:10:49.400544 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.400589 kubelet[2938]: W0214 01:10:49.400564 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.402120 kubelet[2938]: E0214 01:10:49.401556 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.403468 kubelet[2938]: E0214 01:10:49.402548 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.403468 kubelet[2938]: W0214 01:10:49.402579 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.406576 kubelet[2938]: E0214 01:10:49.403719 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.406771 kubelet[2938]: E0214 01:10:49.406750 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.406983 kubelet[2938]: W0214 01:10:49.406959 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.411633 kubelet[2938]: E0214 01:10:49.411598 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.413477 kubelet[2938]: E0214 01:10:49.412783 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.413641 kubelet[2938]: W0214 01:10:49.413617 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.414742 kubelet[2938]: E0214 01:10:49.414720 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.416998 kubelet[2938]: E0214 01:10:49.416512 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.416998 kubelet[2938]: W0214 01:10:49.416533 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.416998 kubelet[2938]: E0214 01:10:49.416563 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.420534 kubelet[2938]: E0214 01:10:49.418529 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.420534 kubelet[2938]: W0214 01:10:49.418751 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.420534 kubelet[2938]: E0214 01:10:49.418773 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.429142 kubelet[2938]: E0214 01:10:49.428917 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.429142 kubelet[2938]: W0214 01:10:49.428941 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.429142 kubelet[2938]: E0214 01:10:49.428960 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.430812 kubelet[2938]: E0214 01:10:49.430647 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.430812 kubelet[2938]: W0214 01:10:49.430667 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.430812 kubelet[2938]: E0214 01:10:49.430684 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.432714 kubelet[2938]: E0214 01:10:49.432615 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.432714 kubelet[2938]: W0214 01:10:49.432636 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.432714 kubelet[2938]: E0214 01:10:49.432652 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:49.516774 kubelet[2938]: E0214 01:10:49.516737 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:49.518473 kubelet[2938]: W0214 01:10:49.516766 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:49.518473 kubelet[2938]: E0214 01:10:49.517207 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:49.543835 containerd[1629]: time="2025-02-14T01:10:49.542936643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:10:49.543835 containerd[1629]: time="2025-02-14T01:10:49.543248401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:10:49.543835 containerd[1629]: time="2025-02-14T01:10:49.543271641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:49.543835 containerd[1629]: time="2025-02-14T01:10:49.543409856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:10:49.659323 containerd[1629]: time="2025-02-14T01:10:49.657748044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf6c94fb4-mgxn9,Uid:1821f6ee-2bb4-4342-b070-23f3e14d282c,Namespace:calico-system,Attempt:0,} returns sandbox id \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\"" Feb 14 01:10:49.666783 containerd[1629]: time="2025-02-14T01:10:49.666195040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 14 01:10:49.687592 containerd[1629]: time="2025-02-14T01:10:49.687536562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xlprx,Uid:2b748ae9-d5a4-45cf-b668-849d2794f614,Namespace:calico-system,Attempt:0,} returns sandbox id \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\"" Feb 14 01:10:51.319546 kubelet[2938]: E0214 01:10:51.319275 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:10:51.645538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916526733.mount: Deactivated successfully. Feb 14 01:10:51.966501 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:10:51.966406 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:10:51.966892 systemd-resolved[1516]: Flushed all caches. 
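The repeating driver-call.go errors above come from the kubelet's FlexVolume prober: it execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and parses the driver's stdout as JSON. The nodeagent~uds/uds binary has not been installed yet, so the exec fails, stdout stays empty, and unmarshalling the empty string yields exactly the "unexpected end of JSON input" in the log. A minimal sketch reproducing the failure and the shape of a successful init reply; the DriverStatus struct here is illustrative, following the FlexVolume JSON convention, and is not the kubelet's exact type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus sketches the JSON a FlexVolume driver prints on stdout
// for the "init" call (field names follow the FlexVolume convention;
// this is an assumption for illustration, not the kubelet's own type).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st DriverStatus

	// The uds binary is absent, so the kubelet captures empty output;
	// unmarshalling "" reproduces the error string seen in the log.
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("error:", err) // error: unexpected end of JSON input
	}

	// A working driver would answer the init call with something like:
	reply := `{"status":"Success","capabilities":{"attach":false}}`
	if err := json.Unmarshal([]byte(reply), &st); err == nil {
		fmt.Printf("parsed: %+v\n", st)
	}
}
```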
Feb 14 01:10:52.935779 containerd[1629]: time="2025-02-14T01:10:52.935696967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:52.939162 containerd[1629]: time="2025-02-14T01:10:52.939041353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 14 01:10:52.942135 containerd[1629]: time="2025-02-14T01:10:52.941785967Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:52.948305 containerd[1629]: time="2025-02-14T01:10:52.948243452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:52.949583 containerd[1629]: time="2025-02-14T01:10:52.949524628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.283272534s" Feb 14 01:10:52.949863 containerd[1629]: time="2025-02-14T01:10:52.949706987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 14 01:10:52.954482 containerd[1629]: time="2025-02-14T01:10:52.952866764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 14 01:10:52.994400 containerd[1629]: time="2025-02-14T01:10:52.994350162Z" level=info msg="CreateContainer within sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 14 01:10:53.030364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174067401.mount: Deactivated successfully. 
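The containerd entries around this point trace the standard CRI call sequence for bringing up a pod: RunPodSandbox returns a sandbox id, PullImage fetches the image (resolving the tag to the sha256 repo digest recorded above), CreateContainer registers a container inside that sandbox, and StartContainer runs it. A minimal sketch of the same sequence against containerd's CRI socket, assuming the default socket path and eliding the pod and container configs, so it compiles but is not a complete client:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd socket; adjust if relocated on your host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rtc := runtimeapi.NewRuntimeServiceClient(conn)
	imc := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> sandbox id (cf. the 02218a... / 44853c... ids above).
	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* metadata, namespaces, ... */ }
	sb, err := rtc.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. PullImage; the runtime resolves the tag to its repo digest.
	img := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"}
	if _, err := imc.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
		log.Fatal(err)
	}

	// 3. CreateContainer within the sandbox, then 4. StartContainer.
	ctrCfg := &runtimeapi.ContainerConfig{ /* metadata, image, command, ... */ }
	ctr, err := rtc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rtc.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```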
Feb 14 01:10:53.031553 containerd[1629]: time="2025-02-14T01:10:53.031423724Z" level=info msg="CreateContainer within sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\"" Feb 14 01:10:53.033226 containerd[1629]: time="2025-02-14T01:10:53.033016273Z" level=info msg="StartContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\"" Feb 14 01:10:53.158795 containerd[1629]: time="2025-02-14T01:10:53.158721292Z" level=info msg="StartContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" returns successfully" Feb 14 01:10:53.318573 kubelet[2938]: E0214 01:10:53.318498 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:10:53.557671 kubelet[2938]: I0214 01:10:53.555312 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf6c94fb4-mgxn9" podStartSLOduration=2.267554449 podStartE2EDuration="5.555277893s" podCreationTimestamp="2025-02-14 01:10:48 +0000 UTC" firstStartedPulling="2025-02-14 01:10:49.663834907 +0000 UTC m=+23.531881415" lastFinishedPulling="2025-02-14 01:10:52.951558345 +0000 UTC m=+26.819604859" observedRunningTime="2025-02-14 01:10:53.552812244 +0000 UTC m=+27.420858769" watchObservedRunningTime="2025-02-14 01:10:53.555277893 +0000 UTC m=+27.423324418" Feb 14 01:10:53.603055 kubelet[2938]: E0214 01:10:53.602894 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.603055 kubelet[2938]: W0214 01:10:53.602948 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.603055 kubelet[2938]: E0214 01:10:53.602996 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.603782 kubelet[2938]: E0214 01:10:53.603759 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.603782 kubelet[2938]: W0214 01:10:53.603781 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.603925 kubelet[2938]: E0214 01:10:53.603797 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.604072 kubelet[2938]: E0214 01:10:53.604052 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.604072 kubelet[2938]: W0214 01:10:53.604071 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.604177 kubelet[2938]: E0214 01:10:53.604088 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.604367 kubelet[2938]: E0214 01:10:53.604348 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.604475 kubelet[2938]: W0214 01:10:53.604368 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.604475 kubelet[2938]: E0214 01:10:53.604383 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.604681 kubelet[2938]: E0214 01:10:53.604662 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.604681 kubelet[2938]: W0214 01:10:53.604681 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.604808 kubelet[2938]: E0214 01:10:53.604696 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.604970 kubelet[2938]: E0214 01:10:53.604951 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.604970 kubelet[2938]: W0214 01:10:53.604969 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.605103 kubelet[2938]: E0214 01:10:53.604985 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.605234 kubelet[2938]: E0214 01:10:53.605215 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.605301 kubelet[2938]: W0214 01:10:53.605236 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.605301 kubelet[2938]: E0214 01:10:53.605252 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.605529 kubelet[2938]: E0214 01:10:53.605509 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.605529 kubelet[2938]: W0214 01:10:53.605528 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.605684 kubelet[2938]: E0214 01:10:53.605543 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.605825 kubelet[2938]: E0214 01:10:53.605803 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.605825 kubelet[2938]: W0214 01:10:53.605824 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.605958 kubelet[2938]: E0214 01:10:53.605839 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.606086 kubelet[2938]: E0214 01:10:53.606067 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.606199 kubelet[2938]: W0214 01:10:53.606087 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.606199 kubelet[2938]: E0214 01:10:53.606103 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.606345 kubelet[2938]: E0214 01:10:53.606325 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.606426 kubelet[2938]: W0214 01:10:53.606345 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.606426 kubelet[2938]: E0214 01:10:53.606360 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.606645 kubelet[2938]: E0214 01:10:53.606619 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.606645 kubelet[2938]: W0214 01:10:53.606639 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.606781 kubelet[2938]: E0214 01:10:53.606654 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.606933 kubelet[2938]: E0214 01:10:53.606914 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.606933 kubelet[2938]: W0214 01:10:53.606933 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.607054 kubelet[2938]: E0214 01:10:53.606948 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.607196 kubelet[2938]: E0214 01:10:53.607177 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.607196 kubelet[2938]: W0214 01:10:53.607196 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.607292 kubelet[2938]: E0214 01:10:53.607213 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.607482 kubelet[2938]: E0214 01:10:53.607463 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.607482 kubelet[2938]: W0214 01:10:53.607481 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.607603 kubelet[2938]: E0214 01:10:53.607497 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.666797 kubelet[2938]: E0214 01:10:53.666333 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.666797 kubelet[2938]: W0214 01:10:53.666560 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.666797 kubelet[2938]: E0214 01:10:53.666599 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.667963 kubelet[2938]: E0214 01:10:53.667920 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.667963 kubelet[2938]: W0214 01:10:53.667956 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.668106 kubelet[2938]: E0214 01:10:53.667984 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.668724 kubelet[2938]: E0214 01:10:53.668609 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.668789 kubelet[2938]: W0214 01:10:53.668756 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.668936 kubelet[2938]: E0214 01:10:53.668876 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.669275 kubelet[2938]: E0214 01:10:53.669251 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.669349 kubelet[2938]: W0214 01:10:53.669273 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.670122 kubelet[2938]: E0214 01:10:53.669675 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.670239 kubelet[2938]: E0214 01:10:53.670216 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.670239 kubelet[2938]: W0214 01:10:53.670231 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.670603 kubelet[2938]: E0214 01:10:53.670548 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.670870 kubelet[2938]: E0214 01:10:53.670834 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.670870 kubelet[2938]: W0214 01:10:53.670861 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.671137 kubelet[2938]: E0214 01:10:53.671029 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.671222 kubelet[2938]: E0214 01:10:53.671180 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.671222 kubelet[2938]: W0214 01:10:53.671194 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.671222 kubelet[2938]: E0214 01:10:53.671216 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.671639 kubelet[2938]: E0214 01:10:53.671608 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.671758 kubelet[2938]: W0214 01:10:53.671658 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.671758 kubelet[2938]: E0214 01:10:53.671692 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.672143 kubelet[2938]: E0214 01:10:53.672119 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.672143 kubelet[2938]: W0214 01:10:53.672141 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.672252 kubelet[2938]: E0214 01:10:53.672178 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.672533 kubelet[2938]: E0214 01:10:53.672511 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.672533 kubelet[2938]: W0214 01:10:53.672532 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.672701 kubelet[2938]: E0214 01:10:53.672675 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.673093 kubelet[2938]: E0214 01:10:53.672986 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.673093 kubelet[2938]: W0214 01:10:53.673058 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.673218 kubelet[2938]: E0214 01:10:53.673139 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.673536 kubelet[2938]: E0214 01:10:53.673513 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.673608 kubelet[2938]: W0214 01:10:53.673535 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.673679 kubelet[2938]: E0214 01:10:53.673657 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.674143 kubelet[2938]: E0214 01:10:53.674119 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.674143 kubelet[2938]: W0214 01:10:53.674142 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.674782 kubelet[2938]: E0214 01:10:53.674159 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.676122 kubelet[2938]: E0214 01:10:53.676088 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.676122 kubelet[2938]: W0214 01:10:53.676114 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.676284 kubelet[2938]: E0214 01:10:53.676139 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.676631 kubelet[2938]: E0214 01:10:53.676609 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.676631 kubelet[2938]: W0214 01:10:53.676630 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.676781 kubelet[2938]: E0214 01:10:53.676665 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.676980 kubelet[2938]: E0214 01:10:53.676959 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.676980 kubelet[2938]: W0214 01:10:53.676979 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.677104 kubelet[2938]: E0214 01:10:53.676995 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:53.677408 kubelet[2938]: E0214 01:10:53.677386 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.677408 kubelet[2938]: W0214 01:10:53.677409 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.677551 kubelet[2938]: E0214 01:10:53.677426 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:53.678581 kubelet[2938]: E0214 01:10:53.678441 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:53.678581 kubelet[2938]: W0214 01:10:53.678577 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:53.678762 kubelet[2938]: E0214 01:10:53.678606 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.523792 kubelet[2938]: I0214 01:10:54.523738 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:10:54.618989 kubelet[2938]: E0214 01:10:54.618912 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.621536 kubelet[2938]: W0214 01:10:54.619326 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.621608 kubelet[2938]: E0214 01:10:54.621548 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.622596 kubelet[2938]: E0214 01:10:54.622542 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.622596 kubelet[2938]: W0214 01:10:54.622588 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.622770 kubelet[2938]: E0214 01:10:54.622622 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.623268 kubelet[2938]: E0214 01:10:54.623244 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.623888 kubelet[2938]: W0214 01:10:54.623317 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.623888 kubelet[2938]: E0214 01:10:54.623338 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.623888 kubelet[2938]: E0214 01:10:54.623656 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.623888 kubelet[2938]: W0214 01:10:54.623670 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.623888 kubelet[2938]: E0214 01:10:54.623686 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.624506 kubelet[2938]: E0214 01:10:54.624474 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.624506 kubelet[2938]: W0214 01:10:54.624504 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.624875 kubelet[2938]: E0214 01:10:54.624521 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.625371 kubelet[2938]: E0214 01:10:54.625347 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.625491 kubelet[2938]: W0214 01:10:54.625389 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.625491 kubelet[2938]: E0214 01:10:54.625409 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.626250 kubelet[2938]: E0214 01:10:54.625817 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.626250 kubelet[2938]: W0214 01:10:54.625836 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.626250 kubelet[2938]: E0214 01:10:54.625852 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.626250 kubelet[2938]: E0214 01:10:54.626214 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.626933 kubelet[2938]: W0214 01:10:54.626253 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.626933 kubelet[2938]: E0214 01:10:54.626276 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.626933 kubelet[2938]: E0214 01:10:54.626783 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.626933 kubelet[2938]: W0214 01:10:54.626883 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.627139 kubelet[2938]: E0214 01:10:54.626943 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.627918 kubelet[2938]: E0214 01:10:54.627526 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.627918 kubelet[2938]: W0214 01:10:54.627570 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.627918 kubelet[2938]: E0214 01:10:54.627679 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.629002 kubelet[2938]: E0214 01:10:54.628205 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.629002 kubelet[2938]: W0214 01:10:54.628227 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.629002 kubelet[2938]: E0214 01:10:54.628246 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.629002 kubelet[2938]: E0214 01:10:54.628638 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.629002 kubelet[2938]: W0214 01:10:54.628653 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.629002 kubelet[2938]: E0214 01:10:54.628693 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.629298 kubelet[2938]: E0214 01:10:54.629063 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.629298 kubelet[2938]: W0214 01:10:54.629078 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.629298 kubelet[2938]: E0214 01:10:54.629121 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.629879 kubelet[2938]: E0214 01:10:54.629519 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.629879 kubelet[2938]: W0214 01:10:54.629541 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.629879 kubelet[2938]: E0214 01:10:54.629557 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.630067 kubelet[2938]: E0214 01:10:54.629884 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.630067 kubelet[2938]: W0214 01:10:54.629899 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.630067 kubelet[2938]: E0214 01:10:54.629940 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.679091 kubelet[2938]: E0214 01:10:54.678824 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.679091 kubelet[2938]: W0214 01:10:54.678879 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.679091 kubelet[2938]: E0214 01:10:54.678917 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.679922 kubelet[2938]: E0214 01:10:54.679692 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.679922 kubelet[2938]: W0214 01:10:54.679720 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.679922 kubelet[2938]: E0214 01:10:54.679737 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.680271 kubelet[2938]: E0214 01:10:54.680252 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.680493 kubelet[2938]: W0214 01:10:54.680364 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.680493 kubelet[2938]: E0214 01:10:54.680402 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.681404 kubelet[2938]: E0214 01:10:54.681351 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.681533 kubelet[2938]: W0214 01:10:54.681405 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.681533 kubelet[2938]: E0214 01:10:54.681515 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.682761 kubelet[2938]: E0214 01:10:54.682704 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.682761 kubelet[2938]: W0214 01:10:54.682727 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.683170 kubelet[2938]: E0214 01:10:54.682875 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.683170 kubelet[2938]: E0214 01:10:54.683103 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.683170 kubelet[2938]: W0214 01:10:54.683120 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.683837 kubelet[2938]: E0214 01:10:54.683785 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.684177 kubelet[2938]: E0214 01:10:54.683838 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.684177 kubelet[2938]: W0214 01:10:54.683854 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.684177 kubelet[2938]: E0214 01:10:54.683896 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.686256 kubelet[2938]: E0214 01:10:54.685968 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.686256 kubelet[2938]: W0214 01:10:54.685989 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.686256 kubelet[2938]: E0214 01:10:54.686017 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.688671 kubelet[2938]: E0214 01:10:54.688305 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.688671 kubelet[2938]: W0214 01:10:54.688337 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.688451 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.688807 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.689438 kubelet[2938]: W0214 01:10:54.688823 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.689080 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.689438 kubelet[2938]: W0214 01:10:54.689107 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.689125 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.689182 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.689438 kubelet[2938]: E0214 01:10:54.689385 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.689438 kubelet[2938]: W0214 01:10:54.689411 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.690628 kubelet[2938]: E0214 01:10:54.689436 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.690628 kubelet[2938]: E0214 01:10:54.689767 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.690628 kubelet[2938]: W0214 01:10:54.689782 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.690628 kubelet[2938]: E0214 01:10:54.689797 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.691981 kubelet[2938]: E0214 01:10:54.691415 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.691981 kubelet[2938]: W0214 01:10:54.691438 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.691981 kubelet[2938]: E0214 01:10:54.691757 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.691981 kubelet[2938]: W0214 01:10:54.691772 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.691981 kubelet[2938]: E0214 01:10:54.691877 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.691981 kubelet[2938]: E0214 01:10:54.691929 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.692320 kubelet[2938]: E0214 01:10:54.692111 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.692320 kubelet[2938]: W0214 01:10:54.692126 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.692320 kubelet[2938]: E0214 01:10:54.692141 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.692442 kubelet[2938]: E0214 01:10:54.692407 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.692442 kubelet[2938]: W0214 01:10:54.692421 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.692442 kubelet[2938]: E0214 01:10:54.692460 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:10:54.693755 kubelet[2938]: E0214 01:10:54.693435 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:10:54.693755 kubelet[2938]: W0214 01:10:54.693496 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:10:54.693755 kubelet[2938]: E0214 01:10:54.693517 2938 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:10:54.853370 containerd[1629]: time="2025-02-14T01:10:54.850986223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:54.855490 containerd[1629]: time="2025-02-14T01:10:54.854377055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 14 01:10:54.855573 containerd[1629]: time="2025-02-14T01:10:54.855518710Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:54.868177 containerd[1629]: time="2025-02-14T01:10:54.868067782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:10:54.871244 containerd[1629]: time="2025-02-14T01:10:54.871159704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.918241109s" Feb 14 01:10:54.871244 containerd[1629]: time="2025-02-14T01:10:54.871240124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 14 01:10:54.878125 containerd[1629]: time="2025-02-14T01:10:54.875292375Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 01:10:54.900433 containerd[1629]: time="2025-02-14T01:10:54.900228883Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\"" Feb 14 01:10:54.902550 containerd[1629]: time="2025-02-14T01:10:54.902517969Z" level=info msg="StartContainer for \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\"" Feb 14 01:10:55.025676 containerd[1629]: time="2025-02-14T01:10:55.025370087Z" level=info msg="StartContainer for \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\" returns successfully" Feb 14 01:10:55.103055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f-rootfs.mount: Deactivated successfully. 
Feb 14 01:10:55.307931 containerd[1629]: time="2025-02-14T01:10:55.279301710Z" level=info msg="shim disconnected" id=256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f namespace=k8s.io Feb 14 01:10:55.307931 containerd[1629]: time="2025-02-14T01:10:55.307922329Z" level=warning msg="cleaning up after shim disconnected" id=256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f namespace=k8s.io Feb 14 01:10:55.308300 containerd[1629]: time="2025-02-14T01:10:55.307950308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:10:55.315811 kubelet[2938]: E0214 01:10:55.315526 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:10:55.534750 containerd[1629]: time="2025-02-14T01:10:55.534171992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 14 01:10:57.316347 kubelet[2938]: E0214 01:10:57.316178 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:10:59.316701 kubelet[2938]: E0214 01:10:59.316569 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:11:01.317634 kubelet[2938]: E0214 01:11:01.315963 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:11:01.851044 containerd[1629]: time="2025-02-14T01:11:01.850945688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:01.853053 containerd[1629]: time="2025-02-14T01:11:01.852874778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 14 01:11:01.854051 containerd[1629]: time="2025-02-14T01:11:01.853986106Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:01.859574 containerd[1629]: time="2025-02-14T01:11:01.857429937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:01.859574 containerd[1629]: time="2025-02-14T01:11:01.858637521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.32439728s" Feb 14 01:11:01.859574 containerd[1629]: time="2025-02-14T01:11:01.858687775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 14 01:11:01.864699 containerd[1629]: time="2025-02-14T01:11:01.864575999Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 01:11:01.908492 containerd[1629]: time="2025-02-14T01:11:01.907883132Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\"" Feb 14 01:11:01.907970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314608838.mount: Deactivated successfully. Feb 14 01:11:01.912154 containerd[1629]: time="2025-02-14T01:11:01.911886127Z" level=info msg="StartContainer for \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\"" Feb 14 01:11:02.010588 systemd[1]: run-containerd-runc-k8s.io-b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e-runc.1k6VZp.mount: Deactivated successfully. Feb 14 01:11:02.083821 containerd[1629]: time="2025-02-14T01:11:02.083600318Z" level=info msg="StartContainer for \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\" returns successfully" Feb 14 01:11:03.317442 kubelet[2938]: E0214 01:11:03.315362 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:11:03.330219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e-rootfs.mount: Deactivated successfully. 
Feb 14 01:11:03.338255 containerd[1629]: time="2025-02-14T01:11:03.338087582Z" level=info msg="shim disconnected" id=b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e namespace=k8s.io Feb 14 01:11:03.338920 containerd[1629]: time="2025-02-14T01:11:03.338260254Z" level=warning msg="cleaning up after shim disconnected" id=b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e namespace=k8s.io Feb 14 01:11:03.338920 containerd[1629]: time="2025-02-14T01:11:03.338303665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:03.351616 kubelet[2938]: I0214 01:11:03.351572 2938 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 14 01:11:03.409507 kubelet[2938]: I0214 01:11:03.406111 2938 topology_manager.go:215] "Topology Admit Handler" podUID="b1f23b66-09c4-4269-b990-1ebe27f4a53b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rjd7n" Feb 14 01:11:03.409507 kubelet[2938]: I0214 01:11:03.408001 2938 topology_manager.go:215] "Topology Admit Handler" podUID="9cb84537-ce23-40cb-9b1d-10b46a87ae08" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgn94" Feb 14 01:11:03.415642 kubelet[2938]: I0214 01:11:03.415459 2938 topology_manager.go:215] "Topology Admit Handler" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" podNamespace="calico-system" podName="calico-kube-controllers-6496b8966d-qmcxb" Feb 14 01:11:03.416592 kubelet[2938]: I0214 01:11:03.416477 2938 topology_manager.go:215] "Topology Admit Handler" podUID="7bc0044f-26fb-4412-b84f-e458618cf4b4" podNamespace="calico-apiserver" podName="calico-apiserver-7654bfc7bc-jvfgq" Feb 14 01:11:03.419514 kubelet[2938]: I0214 01:11:03.416682 2938 topology_manager.go:215] "Topology Admit Handler" podUID="2ec0bfed-3539-4181-a415-bf87b9386eac" podNamespace="calico-apiserver" podName="calico-apiserver-7654bfc7bc-qthrw" Feb 14 01:11:03.443220 kubelet[2938]: I0214 01:11:03.443109 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spzxs\" (UniqueName: \"kubernetes.io/projected/7bc0044f-26fb-4412-b84f-e458618cf4b4-kube-api-access-spzxs\") pod \"calico-apiserver-7654bfc7bc-jvfgq\" (UID: \"7bc0044f-26fb-4412-b84f-e458618cf4b4\") " pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" Feb 14 01:11:03.443220 kubelet[2938]: I0214 01:11:03.443178 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phv7m\" (UniqueName: \"kubernetes.io/projected/2ec0bfed-3539-4181-a415-bf87b9386eac-kube-api-access-phv7m\") pod \"calico-apiserver-7654bfc7bc-qthrw\" (UID: \"2ec0bfed-3539-4181-a415-bf87b9386eac\") " pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" Feb 14 01:11:03.443671 kubelet[2938]: I0214 01:11:03.443517 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrtt\" (UniqueName: \"kubernetes.io/projected/b1f23b66-09c4-4269-b990-1ebe27f4a53b-kube-api-access-lcrtt\") pod \"coredns-7db6d8ff4d-rjd7n\" (UID: \"b1f23b66-09c4-4269-b990-1ebe27f4a53b\") " pod="kube-system/coredns-7db6d8ff4d-rjd7n" Feb 14 01:11:03.443671 kubelet[2938]: I0214 01:11:03.443584 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl8s7\" (UniqueName: \"kubernetes.io/projected/8293e9cd-d766-48c9-bffd-cc310990d080-kube-api-access-zl8s7\") pod \"calico-kube-controllers-6496b8966d-qmcxb\" (UID: \"8293e9cd-d766-48c9-bffd-cc310990d080\") " 
pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" Feb 14 01:11:03.444304 kubelet[2938]: I0214 01:11:03.443831 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f23b66-09c4-4269-b990-1ebe27f4a53b-config-volume\") pod \"coredns-7db6d8ff4d-rjd7n\" (UID: \"b1f23b66-09c4-4269-b990-1ebe27f4a53b\") " pod="kube-system/coredns-7db6d8ff4d-rjd7n" Feb 14 01:11:03.444304 kubelet[2938]: I0214 01:11:03.443995 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bc0044f-26fb-4412-b84f-e458618cf4b4-calico-apiserver-certs\") pod \"calico-apiserver-7654bfc7bc-jvfgq\" (UID: \"7bc0044f-26fb-4412-b84f-e458618cf4b4\") " pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" Feb 14 01:11:03.444304 kubelet[2938]: I0214 01:11:03.444041 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb685\" (UniqueName: \"kubernetes.io/projected/9cb84537-ce23-40cb-9b1d-10b46a87ae08-kube-api-access-zb685\") pod \"coredns-7db6d8ff4d-vgn94\" (UID: \"9cb84537-ce23-40cb-9b1d-10b46a87ae08\") " pod="kube-system/coredns-7db6d8ff4d-vgn94" Feb 14 01:11:03.444304 kubelet[2938]: I0214 01:11:03.444269 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ec0bfed-3539-4181-a415-bf87b9386eac-calico-apiserver-certs\") pod \"calico-apiserver-7654bfc7bc-qthrw\" (UID: \"2ec0bfed-3539-4181-a415-bf87b9386eac\") " pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" Feb 14 01:11:03.445282 kubelet[2938]: I0214 01:11:03.444532 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8293e9cd-d766-48c9-bffd-cc310990d080-tigera-ca-bundle\") pod \"calico-kube-controllers-6496b8966d-qmcxb\" (UID: \"8293e9cd-d766-48c9-bffd-cc310990d080\") " pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" Feb 14 01:11:03.445282 kubelet[2938]: I0214 01:11:03.444568 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cb84537-ce23-40cb-9b1d-10b46a87ae08-config-volume\") pod \"coredns-7db6d8ff4d-vgn94\" (UID: \"9cb84537-ce23-40cb-9b1d-10b46a87ae08\") " pod="kube-system/coredns-7db6d8ff4d-vgn94" Feb 14 01:11:03.614442 containerd[1629]: time="2025-02-14T01:11:03.611981126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 14 01:11:03.735506 containerd[1629]: time="2025-02-14T01:11:03.735300793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjd7n,Uid:b1f23b66-09c4-4269-b990-1ebe27f4a53b,Namespace:kube-system,Attempt:0,}" Feb 14 01:11:03.738081 containerd[1629]: time="2025-02-14T01:11:03.737989062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgn94,Uid:9cb84537-ce23-40cb-9b1d-10b46a87ae08,Namespace:kube-system,Attempt:0,}" Feb 14 01:11:03.739159 containerd[1629]: time="2025-02-14T01:11:03.739122255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496b8966d-qmcxb,Uid:8293e9cd-d766-48c9-bffd-cc310990d080,Namespace:calico-system,Attempt:0,}" Feb 14 01:11:03.753485 containerd[1629]: time="2025-02-14T01:11:03.753140342Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-qthrw,Uid:2ec0bfed-3539-4181-a415-bf87b9386eac,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:11:03.757751 containerd[1629]: time="2025-02-14T01:11:03.757698306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-jvfgq,Uid:7bc0044f-26fb-4412-b84f-e458618cf4b4,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:11:04.121160 containerd[1629]: time="2025-02-14T01:11:04.121084617Z" level=error msg="Failed to destroy network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.125620 containerd[1629]: time="2025-02-14T01:11:04.125574208Z" level=error msg="Failed to destroy network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.129861 containerd[1629]: time="2025-02-14T01:11:04.129809254Z" level=error msg="encountered an error cleaning up failed sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.138967 containerd[1629]: time="2025-02-14T01:11:04.138868614Z" level=error msg="Failed to destroy network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.139615 containerd[1629]: time="2025-02-14T01:11:04.139578491Z" level=error msg="encountered an error cleaning up failed sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.139814 containerd[1629]: time="2025-02-14T01:11:04.139775835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496b8966d-qmcxb,Uid:8293e9cd-d766-48c9-bffd-cc310990d080,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.140542 containerd[1629]: time="2025-02-14T01:11:04.129806454Z" level=error msg="encountered an error cleaning up failed sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.140637 containerd[1629]: 
time="2025-02-14T01:11:04.140565122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjd7n,Uid:b1f23b66-09c4-4269-b990-1ebe27f4a53b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.145988 containerd[1629]: time="2025-02-14T01:11:04.145716774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-jvfgq,Uid:7bc0044f-26fb-4412-b84f-e458618cf4b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.145988 containerd[1629]: time="2025-02-14T01:11:04.145869128Z" level=error msg="Failed to destroy network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.146620 containerd[1629]: time="2025-02-14T01:11:04.146584422Z" level=error msg="encountered an error cleaning up failed sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.146781 containerd[1629]: time="2025-02-14T01:11:04.146743208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgn94,Uid:9cb84537-ce23-40cb-9b1d-10b46a87ae08,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.148263 containerd[1629]: time="2025-02-14T01:11:04.146970247Z" level=error msg="Failed to destroy network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.148809 kubelet[2938]: E0214 01:11:04.147206 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.148809 kubelet[2938]: E0214 01:11:04.147304 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.148809 kubelet[2938]: E0214 01:11:04.147206 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.148809 kubelet[2938]: E0214 01:11:04.147386 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vgn94" Feb 14 01:11:04.149090 containerd[1629]: time="2025-02-14T01:11:04.148622637Z" level=error msg="encountered an error cleaning up failed sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.149090 containerd[1629]: time="2025-02-14T01:11:04.148669321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-qthrw,Uid:2ec0bfed-3539-4181-a415-bf87b9386eac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.149228 kubelet[2938]: E0214 01:11:04.147429 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" Feb 14 01:11:04.149228 kubelet[2938]: E0214 01:11:04.147480 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" Feb 14 01:11:04.149228 kubelet[2938]: E0214 01:11:04.147586 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7654bfc7bc-jvfgq_calico-apiserver(7bc0044f-26fb-4412-b84f-e458618cf4b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7654bfc7bc-jvfgq_calico-apiserver(7bc0044f-26fb-4412-b84f-e458618cf4b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" podUID="7bc0044f-26fb-4412-b84f-e458618cf4b4" Feb 14 01:11:04.149430 kubelet[2938]: E0214 01:11:04.147386 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" Feb 14 01:11:04.149430 kubelet[2938]: E0214 01:11:04.147653 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" Feb 14 01:11:04.149430 kubelet[2938]: E0214 01:11:04.147691 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6496b8966d-qmcxb_calico-system(8293e9cd-d766-48c9-bffd-cc310990d080)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6496b8966d-qmcxb_calico-system(8293e9cd-d766-48c9-bffd-cc310990d080)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" Feb 14 01:11:04.150012 kubelet[2938]: E0214 01:11:04.147230 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.150012 kubelet[2938]: E0214 01:11:04.147745 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rjd7n" Feb 14 01:11:04.150012 kubelet[2938]: E0214 01:11:04.147774 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rjd7n" Feb 14 01:11:04.150233 kubelet[2938]: E0214 01:11:04.147811 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rjd7n_kube-system(b1f23b66-09c4-4269-b990-1ebe27f4a53b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rjd7n_kube-system(b1f23b66-09c4-4269-b990-1ebe27f4a53b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rjd7n" podUID="b1f23b66-09c4-4269-b990-1ebe27f4a53b" Feb 14 01:11:04.150233 kubelet[2938]: E0214 01:11:04.147439 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vgn94" Feb 14 01:11:04.150233 kubelet[2938]: E0214 01:11:04.147861 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vgn94_kube-system(9cb84537-ce23-40cb-9b1d-10b46a87ae08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vgn94_kube-system(9cb84537-ce23-40cb-9b1d-10b46a87ae08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vgn94" podUID="9cb84537-ce23-40cb-9b1d-10b46a87ae08" Feb 14 01:11:04.150810 kubelet[2938]: E0214 01:11:04.149593 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.150810 kubelet[2938]: E0214 01:11:04.149636 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" Feb 14 01:11:04.150810 kubelet[2938]: E0214 01:11:04.149659 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" Feb 14 01:11:04.151013 kubelet[2938]: E0214 01:11:04.149704 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7654bfc7bc-qthrw_calico-apiserver(2ec0bfed-3539-4181-a415-bf87b9386eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7654bfc7bc-qthrw_calico-apiserver(2ec0bfed-3539-4181-a415-bf87b9386eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" podUID="2ec0bfed-3539-4181-a415-bf87b9386eac" Feb 14 01:11:04.609163 kubelet[2938]: I0214 01:11:04.609117 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:04.613684 kubelet[2938]: I0214 01:11:04.611786 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:04.617630 containerd[1629]: time="2025-02-14T01:11:04.616620737Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:11:04.618128 containerd[1629]: time="2025-02-14T01:11:04.618094285Z" level=info msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" Feb 14 01:11:04.618738 containerd[1629]: time="2025-02-14T01:11:04.618708922Z" level=info msg="Ensure that sandbox d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310 in task-service has been cleanup successfully" Feb 14 01:11:04.619370 containerd[1629]: time="2025-02-14T01:11:04.618728181Z" level=info msg="Ensure that sandbox 1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee in task-service has been cleanup successfully" Feb 14 01:11:04.622051 kubelet[2938]: I0214 01:11:04.621644 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:04.622899 containerd[1629]: time="2025-02-14T01:11:04.622861432Z" level=info msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" Feb 14 01:11:04.624041 containerd[1629]: time="2025-02-14T01:11:04.623825436Z" level=info msg="Ensure that sandbox 9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b in task-service has been cleanup successfully" Feb 14 01:11:04.627066 kubelet[2938]: I0214 01:11:04.627035 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:04.630340 containerd[1629]: time="2025-02-14T01:11:04.630229571Z" level=info msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" Feb 14 01:11:04.630873 containerd[1629]: time="2025-02-14T01:11:04.630538586Z" level=info msg="Ensure that sandbox 
34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54 in task-service has been cleanup successfully" Feb 14 01:11:04.633163 kubelet[2938]: I0214 01:11:04.633042 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:04.635102 containerd[1629]: time="2025-02-14T01:11:04.635068406Z" level=info msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" Feb 14 01:11:04.636168 containerd[1629]: time="2025-02-14T01:11:04.636100797Z" level=info msg="Ensure that sandbox 45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416 in task-service has been cleanup successfully" Feb 14 01:11:04.724352 containerd[1629]: time="2025-02-14T01:11:04.724266322Z" level=error msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" failed" error="failed to destroy network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.725079 kubelet[2938]: E0214 01:11:04.724642 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:04.725079 kubelet[2938]: E0214 01:11:04.724746 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee"} Feb 14 01:11:04.725079 kubelet[2938]: E0214 01:11:04.724871 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bc0044f-26fb-4412-b84f-e458618cf4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:04.725079 kubelet[2938]: E0214 01:11:04.724918 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bc0044f-26fb-4412-b84f-e458618cf4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" podUID="7bc0044f-26fb-4412-b84f-e458618cf4b4" Feb 14 01:11:04.745843 containerd[1629]: time="2025-02-14T01:11:04.745746875Z" level=error msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" failed" error="failed to destroy network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.746574 kubelet[2938]: E0214 01:11:04.746254 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:04.746574 kubelet[2938]: E0214 01:11:04.746356 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54"} Feb 14 01:11:04.746574 kubelet[2938]: E0214 01:11:04.746406 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9cb84537-ce23-40cb-9b1d-10b46a87ae08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:04.746574 kubelet[2938]: E0214 01:11:04.746463 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9cb84537-ce23-40cb-9b1d-10b46a87ae08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vgn94" podUID="9cb84537-ce23-40cb-9b1d-10b46a87ae08" Feb 14 01:11:04.747479 containerd[1629]: time="2025-02-14T01:11:04.746644656Z" level=error msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" failed" error="failed to destroy network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.747900 kubelet[2938]: E0214 01:11:04.747744 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:04.747900 kubelet[2938]: E0214 01:11:04.747800 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b"} Feb 14 01:11:04.747900 kubelet[2938]: E0214 01:11:04.747838 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"8293e9cd-d766-48c9-bffd-cc310990d080\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:04.747900 kubelet[2938]: E0214 01:11:04.747865 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8293e9cd-d766-48c9-bffd-cc310990d080\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" Feb 14 01:11:04.748813 containerd[1629]: time="2025-02-14T01:11:04.748624080Z" level=error msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" failed" error="failed to destroy network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.749062 kubelet[2938]: E0214 01:11:04.748995 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:04.749149 kubelet[2938]: E0214 01:11:04.749074 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310"} Feb 14 01:11:04.749149 kubelet[2938]: E0214 01:11:04.749124 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ec0bfed-3539-4181-a415-bf87b9386eac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:04.749304 kubelet[2938]: E0214 01:11:04.749161 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ec0bfed-3539-4181-a415-bf87b9386eac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" podUID="2ec0bfed-3539-4181-a415-bf87b9386eac" Feb 14 
01:11:04.765400 containerd[1629]: time="2025-02-14T01:11:04.765341798Z" level=error msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" failed" error="failed to destroy network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:04.766102 kubelet[2938]: E0214 01:11:04.765862 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:04.766102 kubelet[2938]: E0214 01:11:04.765965 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416"} Feb 14 01:11:04.766102 kubelet[2938]: E0214 01:11:04.766018 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1f23b66-09c4-4269-b990-1ebe27f4a53b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:04.766102 kubelet[2938]: E0214 01:11:04.766052 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1f23b66-09c4-4269-b990-1ebe27f4a53b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rjd7n" podUID="b1f23b66-09c4-4269-b990-1ebe27f4a53b" Feb 14 01:11:05.323410 containerd[1629]: time="2025-02-14T01:11:05.322713464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mpl8k,Uid:f1c564fe-b330-4d68-9185-0946498c1f87,Namespace:calico-system,Attempt:0,}" Feb 14 01:11:05.455514 containerd[1629]: time="2025-02-14T01:11:05.455378084Z" level=error msg="Failed to destroy network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:05.456480 containerd[1629]: time="2025-02-14T01:11:05.456214111Z" level=error msg="encountered an error cleaning up failed sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:05.456480 
containerd[1629]: time="2025-02-14T01:11:05.456308126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mpl8k,Uid:f1c564fe-b330-4d68-9185-0946498c1f87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:05.458783 kubelet[2938]: E0214 01:11:05.458706 2938 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:05.458907 kubelet[2938]: E0214 01:11:05.458828 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:11:05.458907 kubelet[2938]: E0214 01:11:05.458887 2938 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mpl8k" Feb 14 01:11:05.465732 kubelet[2938]: E0214 01:11:05.458993 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mpl8k_calico-system(f1c564fe-b330-4d68-9185-0946498c1f87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mpl8k_calico-system(f1c564fe-b330-4d68-9185-0946498c1f87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:11:05.461383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819-shm.mount: Deactivated successfully. 
Feb 14 01:11:05.640847 kubelet[2938]: I0214 01:11:05.640598 2938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:05.643068 containerd[1629]: time="2025-02-14T01:11:05.642970724Z" level=info msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" Feb 14 01:11:05.643843 containerd[1629]: time="2025-02-14T01:11:05.643330613Z" level=info msg="Ensure that sandbox 01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819 in task-service has been cleanup successfully" Feb 14 01:11:05.711035 containerd[1629]: time="2025-02-14T01:11:05.710872933Z" level=error msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" failed" error="failed to destroy network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:05.711611 kubelet[2938]: E0214 01:11:05.711293 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:05.711611 kubelet[2938]: E0214 01:11:05.711405 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819"} Feb 14 01:11:05.711611 kubelet[2938]: E0214 01:11:05.711557 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1c564fe-b330-4d68-9185-0946498c1f87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:05.712001 kubelet[2938]: E0214 01:11:05.711623 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1c564fe-b330-4d68-9185-0946498c1f87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mpl8k" podUID="f1c564fe-b330-4d68-9185-0946498c1f87" Feb 14 01:11:13.978349 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:11:13.985926 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:11:13.978538 systemd-resolved[1516]: Flushed all caches. 
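The journald and resolved "Under memory pressure, flushing caches" entries are systemd services reacting to the kernel's pressure-stall information (PSI) interface: when their configured stall threshold trips, they shed caches. The interface itself is just a file, as the short sketch below shows (the trigger/epoll plumbing systemd actually uses is omitted):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// /proc/pressure/memory exposes "some"/"full" stall averages; services
	// watching it react to sustained stalls, e.g. by flushing caches as
	// journald and resolved do in the entries above.
	b, err := os.ReadFile("/proc/pressure/memory")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(b))
}
```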
Feb 14 01:11:14.760675 kubelet[2938]: I0214 01:11:14.760068 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:11:15.519353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412649730.mount: Deactivated successfully. Feb 14 01:11:15.594066 containerd[1629]: time="2025-02-14T01:11:15.593963154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:15.601500 containerd[1629]: time="2025-02-14T01:11:15.601332966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 14 01:11:15.667243 containerd[1629]: time="2025-02-14T01:11:15.665928454Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:15.668808 containerd[1629]: time="2025-02-14T01:11:15.668765865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:15.675889 containerd[1629]: time="2025-02-14T01:11:15.675843202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.058180415s" Feb 14 01:11:15.675998 containerd[1629]: time="2025-02-14T01:11:15.675900081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 14 01:11:15.716250 containerd[1629]: time="2025-02-14T01:11:15.715945462Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 01:11:15.794371 containerd[1629]: time="2025-02-14T01:11:15.794189032Z" level=info msg="CreateContainer within sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\"" Feb 14 01:11:15.801857 containerd[1629]: time="2025-02-14T01:11:15.801718679Z" level=info msg="StartContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\"" Feb 14 01:11:15.977845 containerd[1629]: time="2025-02-14T01:11:15.977655510Z" level=info msg="StartContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" returns successfully" Feb 14 01:11:16.031240 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:11:16.029272 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:11:16.029296 systemd-resolved[1516]: Flushed all caches. Feb 14 01:11:16.277040 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 14 01:11:16.277628 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
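A small cross-check on the pull that finally unblocks networking: the reported "12.058180415s" for calico/node lines up with the PullImage entry back at 01:11:03.611. Diffing the two containerd timestamps confirms it; the leftover few milliseconds are just log emission versus the internal timer:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-02-14T01:11:03.611981126Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-02-14T01:11:15.675843202Z")
	fmt.Println(done.Sub(start)) // 12.063862076s, vs. the reported 12.058180415s
}
```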
Feb 14 01:11:16.322727 containerd[1629]: time="2025-02-14T01:11:16.320346967Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:11:16.452479 containerd[1629]: time="2025-02-14T01:11:16.451086814Z" level=error msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" failed" error="failed to destroy network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:11:16.453660 kubelet[2938]: E0214 01:11:16.452910 2938 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:16.453660 kubelet[2938]: E0214 01:11:16.453055 2938 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee"} Feb 14 01:11:16.453660 kubelet[2938]: E0214 01:11:16.453131 2938 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bc0044f-26fb-4412-b84f-e458618cf4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:11:16.467745 kubelet[2938]: E0214 01:11:16.467626 2938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bc0044f-26fb-4412-b84f-e458618cf4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" podUID="7bc0044f-26fb-4412-b84f-e458618cf4b4" Feb 14 01:11:16.771485 kubelet[2938]: I0214 01:11:16.746981 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xlprx" podStartSLOduration=2.734754233 podStartE2EDuration="28.725841042s" podCreationTimestamp="2025-02-14 01:10:48 +0000 UTC" firstStartedPulling="2025-02-14 01:10:49.689303963 +0000 UTC m=+23.557350475" lastFinishedPulling="2025-02-14 01:11:15.680390772 +0000 UTC m=+49.548437284" observedRunningTime="2025-02-14 01:11:16.723478331 +0000 UTC m=+50.591524866" watchObservedRunningTime="2025-02-14 01:11:16.725841042 +0000 UTC m=+50.593887562" Feb 14 01:11:17.315783 containerd[1629]: time="2025-02-14T01:11:17.315707302Z" level=info msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.403 [INFO][4098] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.406 [INFO][4098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" iface="eth0" netns="/var/run/netns/cni-ca0595c0-63bb-c6d9-9436-8caf9a9ecc84" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.407 [INFO][4098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" iface="eth0" netns="/var/run/netns/cni-ca0595c0-63bb-c6d9-9436-8caf9a9ecc84" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.408 [INFO][4098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" iface="eth0" netns="/var/run/netns/cni-ca0595c0-63bb-c6d9-9436-8caf9a9ecc84" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.408 [INFO][4098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.408 [INFO][4098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.755 [INFO][4104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.758 [INFO][4104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.758 [INFO][4104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.779 [WARNING][4104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.779 [INFO][4104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.782 [INFO][4104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:17.789044 containerd[1629]: 2025-02-14 01:11:17.786 [INFO][4098] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:17.791387 containerd[1629]: time="2025-02-14T01:11:17.789254906Z" level=info msg="TearDown network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" successfully" Feb 14 01:11:17.791387 containerd[1629]: time="2025-02-14T01:11:17.789300200Z" level=info msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" returns successfully" Feb 14 01:11:17.793822 containerd[1629]: time="2025-02-14T01:11:17.793783134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgn94,Uid:9cb84537-ce23-40cb-9b1d-10b46a87ae08,Namespace:kube-system,Attempt:1,}" Feb 14 01:11:17.794778 systemd[1]: run-netns-cni\x2dca0595c0\x2d63bb\x2dc6d9\x2d9436\x2d8caf9a9ecc84.mount: Deactivated successfully. Feb 14 01:11:17.812695 systemd[1]: run-containerd-runc-k8s.io-4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9-runc.11itVI.mount: Deactivated successfully. Feb 14 01:11:18.216263 systemd-networkd[1261]: calif43e7da0f02: Link UP Feb 14 01:11:18.237291 systemd-networkd[1261]: calif43e7da0f02: Gained carrier Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:17.991 [INFO][4151] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.019 [INFO][4151] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0 coredns-7db6d8ff4d- kube-system 9cb84537-ce23-40cb-9b1d-10b46a87ae08 799 0 2025-02-14 01:10:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com coredns-7db6d8ff4d-vgn94 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif43e7da0f02 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.020 [INFO][4151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.097 [INFO][4218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" HandleID="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.117 [INFO][4218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" HandleID="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003194a0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"srv-krhnz.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-vgn94", "timestamp":"2025-02-14 01:11:18.097281536 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.117 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.118 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.118 [INFO][4218] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.122 [INFO][4218] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.131 [INFO][4218] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.139 [INFO][4218] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.142 [INFO][4218] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.147 [INFO][4218] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.147 [INFO][4218] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.149 [INFO][4218] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0 Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.156 [INFO][4218] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.166 [INFO][4218] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.193/26] block=192.168.126.192/26 handle="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.166 [INFO][4218] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.193/26] handle="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.167 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 01:11:18.292075 containerd[1629]: 2025-02-14 01:11:18.167 [INFO][4218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.193/26] IPv6=[] ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" HandleID="k8s-pod-network.b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.174 [INFO][4151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cb84537-ce23-40cb-9b1d-10b46a87ae08", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-vgn94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43e7da0f02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.175 [INFO][4151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.193/32] ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.176 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif43e7da0f02 ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.239 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" 
WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.244 [INFO][4151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cb84537-ce23-40cb-9b1d-10b46a87ae08", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0", Pod:"coredns-7db6d8ff4d-vgn94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43e7da0f02", MAC:"66:60:67:c9:05:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:18.294392 containerd[1629]: 2025-02-14 01:11:18.280 [INFO][4151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vgn94" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:18.328063 containerd[1629]: time="2025-02-14T01:11:18.327590263Z" level=info msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" Feb 14 01:11:18.334780 containerd[1629]: time="2025-02-14T01:11:18.333583419Z" level=info msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" Feb 14 01:11:18.603416 containerd[1629]: time="2025-02-14T01:11:18.601603645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:18.609584 containerd[1629]: time="2025-02-14T01:11:18.607861179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:18.609584 containerd[1629]: time="2025-02-14T01:11:18.607907010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:18.613538 containerd[1629]: time="2025-02-14T01:11:18.613294547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:18.770623 kernel: bpftool[4342]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 14 01:11:18.934482 containerd[1629]: time="2025-02-14T01:11:18.930611975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgn94,Uid:9cb84537-ce23-40cb-9b1d-10b46a87ae08,Namespace:kube-system,Attempt:1,} returns sandbox id \"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0\"" Feb 14 01:11:18.974243 containerd[1629]: time="2025-02-14T01:11:18.973244867Z" level=info msg="CreateContainer within sandbox \"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:11:19.027293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197337508.mount: Deactivated successfully. Feb 14 01:11:19.051026 containerd[1629]: time="2025-02-14T01:11:19.050198784Z" level=info msg="CreateContainer within sandbox \"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdb48e13a3bbe1a2d795225fdb9f0b8c3ee8d0a2d0353fc619be48cf3ffefc63\"" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.720 [INFO][4271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.720 [INFO][4271] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" iface="eth0" netns="/var/run/netns/cni-921e4f00-fd0a-9915-ba19-a34c8ba00459" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.721 [INFO][4271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" iface="eth0" netns="/var/run/netns/cni-921e4f00-fd0a-9915-ba19-a34c8ba00459" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.737 [INFO][4271] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" iface="eth0" netns="/var/run/netns/cni-921e4f00-fd0a-9915-ba19-a34c8ba00459" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.737 [INFO][4271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.737 [INFO][4271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.994 [INFO][4336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.994 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:18.994 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:19.031 [WARNING][4336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:19.031 [INFO][4336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:19.039 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:19.053163 containerd[1629]: 2025-02-14 01:11:19.046 [INFO][4271] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:19.055853 containerd[1629]: time="2025-02-14T01:11:19.052420147Z" level=info msg="StartContainer for \"fdb48e13a3bbe1a2d795225fdb9f0b8c3ee8d0a2d0353fc619be48cf3ffefc63\"" Feb 14 01:11:19.055918 containerd[1629]: time="2025-02-14T01:11:19.054339974Z" level=info msg="TearDown network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" successfully" Feb 14 01:11:19.055918 containerd[1629]: time="2025-02-14T01:11:19.055886921Z" level=info msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" returns successfully" Feb 14 01:11:19.058818 containerd[1629]: time="2025-02-14T01:11:19.058694204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-qthrw,Uid:2ec0bfed-3539-4181-a415-bf87b9386eac,Namespace:calico-apiserver,Attempt:1,}" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.730 [INFO][4279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.731 [INFO][4279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" iface="eth0" netns="/var/run/netns/cni-4178acae-b67b-c70f-eb32-99aa60126a28" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.732 [INFO][4279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" iface="eth0" netns="/var/run/netns/cni-4178acae-b67b-c70f-eb32-99aa60126a28" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.745 [INFO][4279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" iface="eth0" netns="/var/run/netns/cni-4178acae-b67b-c70f-eb32-99aa60126a28" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.745 [INFO][4279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.745 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.996 [INFO][4343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:18.996 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:19.040 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:19.062 [WARNING][4343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:19.062 [INFO][4343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:19.065 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:19.084994 containerd[1629]: 2025-02-14 01:11:19.073 [INFO][4279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:19.087700 containerd[1629]: time="2025-02-14T01:11:19.086700922Z" level=info msg="TearDown network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" successfully" Feb 14 01:11:19.087700 containerd[1629]: time="2025-02-14T01:11:19.086735338Z" level=info msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" returns successfully" Feb 14 01:11:19.091718 containerd[1629]: time="2025-02-14T01:11:19.090710282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mpl8k,Uid:f1c564fe-b330-4d68-9185-0946498c1f87,Namespace:calico-system,Attempt:1,}" Feb 14 01:11:19.333478 containerd[1629]: time="2025-02-14T01:11:19.333408909Z" level=info msg="StartContainer for \"fdb48e13a3bbe1a2d795225fdb9f0b8c3ee8d0a2d0353fc619be48cf3ffefc63\" returns successfully" Feb 14 01:11:19.400613 containerd[1629]: time="2025-02-14T01:11:19.398440451Z" level=info msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" Feb 14 01:11:19.519322 systemd-networkd[1261]: cali0a786b94afc: Link UP Feb 14 01:11:19.527553 systemd-networkd[1261]: cali0a786b94afc: Gained carrier Feb 14 01:11:19.550052 systemd-networkd[1261]: calif43e7da0f02: Gained IPv6LL Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.225 [INFO][4405] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0 calico-apiserver-7654bfc7bc- calico-apiserver 2ec0bfed-3539-4181-a415-bf87b9386eac 808 0 2025-02-14 01:10:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7654bfc7bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com calico-apiserver-7654bfc7bc-qthrw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a786b94afc [] []}} ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.228 [INFO][4405] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" 
Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.352 [INFO][4449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" HandleID="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.373 [INFO][4449] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" HandleID="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381c60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-krhnz.gb1.brightbox.com", "pod":"calico-apiserver-7654bfc7bc-qthrw", "timestamp":"2025-02-14 01:11:19.352868394 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.373 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.373 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.373 [INFO][4449] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.378 [INFO][4449] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.397 [INFO][4449] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.412 [INFO][4449] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.419 [INFO][4449] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.430 [INFO][4449] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.431 [INFO][4449] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.437 [INFO][4449] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.449 [INFO][4449] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 
handle="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.477 [INFO][4449] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.194/26] block=192.168.126.192/26 handle="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.477 [INFO][4449] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.194/26] handle="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.477 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:19.615100 containerd[1629]: 2025-02-14 01:11:19.477 [INFO][4449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.194/26] IPv6=[] ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" HandleID="k8s-pod-network.399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 01:11:19.485 [INFO][4405] cni-plugin/k8s.go 386: Populated endpoint ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ec0bfed-3539-4181-a415-bf87b9386eac", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7654bfc7bc-qthrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a786b94afc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 01:11:19.486 [INFO][4405] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.194/32] ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 
01:11:19.486 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a786b94afc ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 01:11:19.531 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 01:11:19.536 [INFO][4405] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ec0bfed-3539-4181-a415-bf87b9386eac", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e", Pod:"calico-apiserver-7654bfc7bc-qthrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a786b94afc", MAC:"fe:c3:a1:6b:a1:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:19.619040 containerd[1629]: 2025-02-14 01:11:19.606 [INFO][4405] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-qthrw" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:19.728492 containerd[1629]: time="2025-02-14T01:11:19.726695377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:19.728492 containerd[1629]: time="2025-02-14T01:11:19.726782270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:19.728492 containerd[1629]: time="2025-02-14T01:11:19.726824535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:19.733794 containerd[1629]: time="2025-02-14T01:11:19.732761844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:19.807346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149245550.mount: Deactivated successfully. Feb 14 01:11:19.807911 systemd[1]: run-netns-cni\x2d4178acae\x2db67b\x2dc70f\x2deb32\x2d99aa60126a28.mount: Deactivated successfully. Feb 14 01:11:19.808466 systemd[1]: run-netns-cni\x2d921e4f00\x2dfd0a\x2d9915\x2dba19\x2da34c8ba00459.mount: Deactivated successfully. Feb 14 01:11:19.829862 kubelet[2938]: I0214 01:11:19.828977 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vgn94" podStartSLOduration=38.828937607 podStartE2EDuration="38.828937607s" podCreationTimestamp="2025-02-14 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:11:19.828344438 +0000 UTC m=+53.696390963" watchObservedRunningTime="2025-02-14 01:11:19.828937607 +0000 UTC m=+53.696984128" Feb 14 01:11:19.875428 systemd-networkd[1261]: cali66f7905b586: Link UP Feb 14 01:11:19.886460 systemd-networkd[1261]: cali66f7905b586: Gained carrier Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.342 [INFO][4430] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0 csi-node-driver- calico-system f1c564fe-b330-4d68-9185-0946498c1f87 809 0 2025-02-14 01:10:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com csi-node-driver-mpl8k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66f7905b586 [] []}} ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.342 [INFO][4430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.470 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" HandleID="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.548 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" 
HandleID="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-krhnz.gb1.brightbox.com", "pod":"csi-node-driver-mpl8k", "timestamp":"2025-02-14 01:11:19.468624309 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.550 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.551 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.552 [INFO][4466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.569 [INFO][4466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.603 [INFO][4466] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.650 [INFO][4466] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.659 [INFO][4466] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.675 [INFO][4466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.677 [INFO][4466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.688 [INFO][4466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.719 [INFO][4466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.802 [INFO][4466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.195/26] block=192.168.126.192/26 handle="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.806 [INFO][4466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.195/26] handle="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.824 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide 
IPAM lock. Feb 14 01:11:19.956711 containerd[1629]: 2025-02-14 01:11:19.829 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.195/26] IPv6=[] ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" HandleID="k8s-pod-network.ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.850 [INFO][4430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1c564fe-b330-4d68-9185-0946498c1f87", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-mpl8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f7905b586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.850 [INFO][4430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.195/32] ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.850 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66f7905b586 ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.902 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.927 [INFO][4430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1c564fe-b330-4d68-9185-0946498c1f87", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb", Pod:"csi-node-driver-mpl8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f7905b586", MAC:"f2:e5:20:6c:51:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:19.958220 containerd[1629]: 2025-02-14 01:11:19.948 [INFO][4430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb" Namespace="calico-system" Pod="csi-node-driver-mpl8k" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:20.015481 containerd[1629]: time="2025-02-14T01:11:20.013645400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:20.016398 containerd[1629]: time="2025-02-14T01:11:20.014146122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:20.016398 containerd[1629]: time="2025-02-14T01:11:20.015675015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:20.016398 containerd[1629]: time="2025-02-14T01:11:20.015819021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.802 [INFO][4487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.820 [INFO][4487] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" iface="eth0" netns="/var/run/netns/cni-3b196a7b-e51f-609a-e6e2-d8d7bf478577" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.826 [INFO][4487] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" iface="eth0" netns="/var/run/netns/cni-3b196a7b-e51f-609a-e6e2-d8d7bf478577" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.831 [INFO][4487] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" iface="eth0" netns="/var/run/netns/cni-3b196a7b-e51f-609a-e6e2-d8d7bf478577" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.831 [INFO][4487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:19.832 [INFO][4487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.085 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.086 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.086 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.107 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.107 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.111 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:20.119491 containerd[1629]: 2025-02-14 01:11:20.115 [INFO][4487] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:20.134817 containerd[1629]: time="2025-02-14T01:11:20.129608778Z" level=info msg="TearDown network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" successfully" Feb 14 01:11:20.134817 containerd[1629]: time="2025-02-14T01:11:20.134602773Z" level=info msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" returns successfully" Feb 14 01:11:20.139216 containerd[1629]: time="2025-02-14T01:11:20.136997852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjd7n,Uid:b1f23b66-09c4-4269-b990-1ebe27f4a53b,Namespace:kube-system,Attempt:1,}" Feb 14 01:11:20.140973 systemd[1]: run-netns-cni\x2d3b196a7b\x2de51f\x2d609a\x2de6e2\x2dd8d7bf478577.mount: Deactivated successfully. Feb 14 01:11:20.199959 containerd[1629]: time="2025-02-14T01:11:20.199899327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-qthrw,Uid:2ec0bfed-3539-4181-a415-bf87b9386eac,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e\"" Feb 14 01:11:20.212801 containerd[1629]: time="2025-02-14T01:11:20.212758348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 01:11:20.277176 containerd[1629]: time="2025-02-14T01:11:20.277019101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mpl8k,Uid:f1c564fe-b330-4d68-9185-0946498c1f87,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb\"" Feb 14 01:11:20.323650 containerd[1629]: time="2025-02-14T01:11:20.323425484Z" level=info msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" Feb 14 01:11:20.412645 systemd-networkd[1261]: vxlan.calico: Link UP Feb 14 01:11:20.412660 systemd-networkd[1261]: vxlan.calico: Gained carrier Feb 14 01:11:20.568599 systemd-networkd[1261]: cali0a786b94afc: Gained IPv6LL Feb 14 01:11:20.734003 systemd-networkd[1261]: calic959f904f45: Link UP Feb 14 01:11:20.738870 systemd-networkd[1261]: calic959f904f45: Gained carrier Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.256 [INFO][4597] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0 coredns-7db6d8ff4d- kube-system b1f23b66-09c4-4269-b990-1ebe27f4a53b 822 0 2025-02-14 01:10:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com coredns-7db6d8ff4d-rjd7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic959f904f45 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.257 [INFO][4597] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 
01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.506 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" HandleID="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.575 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" HandleID="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398b90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-krhnz.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-rjd7n", "timestamp":"2025-02-14 01:11:20.505055976 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.575 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.575 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.575 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.583 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.604 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.631 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.645 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.661 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.661 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.667 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.700 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.719 [INFO][4633] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.126.196/26] block=192.168.126.192/26 handle="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.719 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.196/26] handle="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.719 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:20.795633 containerd[1629]: 2025-02-14 01:11:20.719 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.196/26] IPv6=[] ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" HandleID="k8s-pod-network.3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.725 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b1f23b66-09c4-4269-b990-1ebe27f4a53b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-rjd7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic959f904f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.725 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.196/32] ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.726 
[INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic959f904f45 ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.742 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.750 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b1f23b66-09c4-4269-b990-1ebe27f4a53b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b", Pod:"coredns-7db6d8ff4d-rjd7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic959f904f45", MAC:"62:a7:65:db:2f:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:20.797146 containerd[1629]: 2025-02-14 01:11:20.773 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rjd7n" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.549 [INFO][4653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.551 [INFO][4653] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" iface="eth0" netns="/var/run/netns/cni-0b8c2a52-ec67-e20d-4baf-3da9a7260763" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.551 [INFO][4653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" iface="eth0" netns="/var/run/netns/cni-0b8c2a52-ec67-e20d-4baf-3da9a7260763" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.552 [INFO][4653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" iface="eth0" netns="/var/run/netns/cni-0b8c2a52-ec67-e20d-4baf-3da9a7260763" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.552 [INFO][4653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.552 [INFO][4653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.775 [INFO][4681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.775 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.775 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.787 [WARNING][4681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.787 [INFO][4681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.812 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:20.822484 containerd[1629]: 2025-02-14 01:11:20.818 [INFO][4653] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:20.828470 containerd[1629]: time="2025-02-14T01:11:20.825519772Z" level=info msg="TearDown network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" successfully" Feb 14 01:11:20.828470 containerd[1629]: time="2025-02-14T01:11:20.825561871Z" level=info msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" returns successfully" Feb 14 01:11:20.831718 containerd[1629]: time="2025-02-14T01:11:20.826443153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496b8966d-qmcxb,Uid:8293e9cd-d766-48c9-bffd-cc310990d080,Namespace:calico-system,Attempt:1,}" Feb 14 01:11:20.834131 systemd[1]: run-netns-cni\x2d0b8c2a52\x2dec67\x2de20d\x2d4baf\x2d3da9a7260763.mount: Deactivated successfully. Feb 14 01:11:20.903036 containerd[1629]: time="2025-02-14T01:11:20.902783943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:20.904157 containerd[1629]: time="2025-02-14T01:11:20.902979601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:20.904157 containerd[1629]: time="2025-02-14T01:11:20.903390555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:20.909759 containerd[1629]: time="2025-02-14T01:11:20.905120777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:21.115096 containerd[1629]: time="2025-02-14T01:11:21.115046157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjd7n,Uid:b1f23b66-09c4-4269-b990-1ebe27f4a53b,Namespace:kube-system,Attempt:1,} returns sandbox id \"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b\"" Feb 14 01:11:21.122352 containerd[1629]: time="2025-02-14T01:11:21.122217723Z" level=info msg="CreateContainer within sandbox \"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:11:21.145214 containerd[1629]: time="2025-02-14T01:11:21.144219132Z" level=info msg="CreateContainer within sandbox \"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fc92ccdc8ad671074f8b7aee535a08afb827918f920c27d6c95c6aa590afcb5\"" Feb 14 01:11:21.145729 containerd[1629]: time="2025-02-14T01:11:21.145660587Z" level=info msg="StartContainer for \"4fc92ccdc8ad671074f8b7aee535a08afb827918f920c27d6c95c6aa590afcb5\"" Feb 14 01:11:21.266755 systemd-networkd[1261]: cali9ca936e326b: Link UP Feb 14 01:11:21.272192 systemd-networkd[1261]: cali9ca936e326b: Gained carrier Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.048 [INFO][4721] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0 calico-kube-controllers-6496b8966d- calico-system 8293e9cd-d766-48c9-bffd-cc310990d080 832 0 2025-02-14 01:10:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6496b8966d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com calico-kube-controllers-6496b8966d-qmcxb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9ca936e326b [] []}} ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.049 [INFO][4721] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.161 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.180 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325080), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-krhnz.gb1.brightbox.com", "pod":"calico-kube-controllers-6496b8966d-qmcxb", "timestamp":"2025-02-14 01:11:21.161357163 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.181 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.181 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
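The "About to acquire" / "Acquired host-wide IPAM lock" pair above brackets every Calico IPAM operation on this node, so the overlapping ADDs in this excerpt (coredns-7db6d8ff4d-rjd7n and calico-kube-controllers-6496b8966d-qmcxb) cannot race on the same allocation block. A minimal sketch of that bracketing, assuming a plain in-process sync.Mutex where Calico actually uses a per-host lock around its datastore writes:

package main

import (
	"fmt"
	"sync"
)

// hostLock stands in for Calico's host-wide IPAM lock: one lock per node,
// held for the whole assign-or-release critical section.
var hostLock sync.Mutex

// withHostLock reproduces the acquire/release log lines seen in the journal
// around the allocation steps.
func withHostLock(pod string, fn func()) {
	fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", pod)
	hostLock.Lock()
	fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", pod)
	defer func() {
		hostLock.Unlock()
		fmt.Printf("[%s] Released host-wide IPAM lock.\n", pod)
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{
		"coredns-7db6d8ff4d-rjd7n",
		"calico-kube-controllers-6496b8966d-qmcxb",
	} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			withHostLock(p, func() { fmt.Printf("[%s] auto-assigning 1 IPv4 address\n", p) })
		}(pod)
	}
	wg.Wait()
}

With the lock held, each request then walks the affinity and block-loading steps that the next entries record.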
Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.181 [INFO][4758] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.186 [INFO][4758] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.200 [INFO][4758] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.210 [INFO][4758] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.213 [INFO][4758] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.219 [INFO][4758] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.219 [INFO][4758] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.222 [INFO][4758] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855 Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.229 [INFO][4758] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.246 [INFO][4758] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.197/26] block=192.168.126.192/26 handle="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.246 [INFO][4758] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.197/26] handle="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.246 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
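The steps just logged — trying the node's affinity for 192.168.126.192/26, loading the block, writing it back to claim the IP, and ending with 192.168.126.197 — are Calico's block-affinity IPAM: each node owns a /26 slice of the pod CIDR and hands out the lowest free address in it. A toy allocator over the same block reproduces the result, assuming .192 and .193 were claimed by workloads outside this excerpt (.194-.196 went to calico-apiserver-7654bfc7bc-qthrw, csi-node-driver-mpl8k and coredns-7db6d8ff4d-rjd7n earlier in the log):

package main

import (
	"fmt"
	"net/netip"
)

// block models one affine CIDR block and the addresses already claimed in it.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> IPAM handle that claimed it
}

// assign claims the lowest unclaimed address in the block for the given
// handle, or reports ok=false when the block is exhausted.
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.126.192/26"),
		used: map[netip.Addr]string{},
	}
	for _, s := range []string{
		"192.168.126.192", "192.168.126.193", // assumed claimed before this excerpt
		"192.168.126.194", "192.168.126.195", "192.168.126.196",
	} {
		b.used[netip.MustParseAddr(s)] = "earlier-handle"
	}
	ip, ok := b.assign("k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855")
	fmt.Println(ip, ok) // 192.168.126.197 true, matching the claim above
}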
Feb 14 01:11:21.323609 containerd[1629]: 2025-02-14 01:11:21.246 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.197/26] IPv6=[] ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.257 [INFO][4721] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0", GenerateName:"calico-kube-controllers-6496b8966d-", Namespace:"calico-system", SelfLink:"", UID:"8293e9cd-d766-48c9-bffd-cc310990d080", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496b8966d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6496b8966d-qmcxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ca936e326b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.257 [INFO][4721] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.197/32] ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.257 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ca936e326b ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.274 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 
01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.280 [INFO][4721] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0", GenerateName:"calico-kube-controllers-6496b8966d-", Namespace:"calico-system", SelfLink:"", UID:"8293e9cd-d766-48c9-bffd-cc310990d080", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496b8966d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855", Pod:"calico-kube-controllers-6496b8966d-qmcxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ca936e326b", MAC:"42:ca:59:5c:ac:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:21.325182 containerd[1629]: 2025-02-14 01:11:21.311 [INFO][4721] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Namespace="calico-system" Pod="calico-kube-controllers-6496b8966d-qmcxb" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:21.356556 containerd[1629]: time="2025-02-14T01:11:21.355964373Z" level=info msg="StartContainer for \"4fc92ccdc8ad671074f8b7aee535a08afb827918f920c27d6c95c6aa590afcb5\" returns successfully" Feb 14 01:11:21.406672 containerd[1629]: time="2025-02-14T01:11:21.406407763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:21.406987 containerd[1629]: time="2025-02-14T01:11:21.406540645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:21.406987 containerd[1629]: time="2025-02-14T01:11:21.406579193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:21.406987 containerd[1629]: time="2025-02-14T01:11:21.406817732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:21.529326 systemd-networkd[1261]: cali66f7905b586: Gained IPv6LL Feb 14 01:11:21.600176 containerd[1629]: time="2025-02-14T01:11:21.600000162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496b8966d-qmcxb,Uid:8293e9cd-d766-48c9-bffd-cc310990d080,Namespace:calico-system,Attempt:1,} returns sandbox id \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\"" Feb 14 01:11:21.839980 kubelet[2938]: I0214 01:11:21.837640 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rjd7n" podStartSLOduration=40.837602369 podStartE2EDuration="40.837602369s" podCreationTimestamp="2025-02-14 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:11:21.834680308 +0000 UTC m=+55.702726831" watchObservedRunningTime="2025-02-14 01:11:21.837602369 +0000 UTC m=+55.705648890" Feb 14 01:11:22.041126 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:11:22.044396 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:11:22.041215 systemd-resolved[1516]: Flushed all caches. Feb 14 01:11:22.104886 systemd-networkd[1261]: calic959f904f45: Gained IPv6LL Feb 14 01:11:22.489409 systemd-networkd[1261]: vxlan.calico: Gained IPv6LL Feb 14 01:11:23.064980 systemd-networkd[1261]: cali9ca936e326b: Gained IPv6LL Feb 14 01:11:24.356247 containerd[1629]: time="2025-02-14T01:11:24.354772108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:24.356247 containerd[1629]: time="2025-02-14T01:11:24.355863622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 14 01:11:24.357777 containerd[1629]: time="2025-02-14T01:11:24.357736871Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:24.365071 containerd[1629]: time="2025-02-14T01:11:24.364997417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:24.366543 containerd[1629]: time="2025-02-14T01:11:24.366482334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.152473571s" Feb 14 01:11:24.367290 containerd[1629]: time="2025-02-14T01:11:24.366692014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 14 01:11:24.374232 containerd[1629]: time="2025-02-14T01:11:24.374177524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 14 01:11:24.383428 containerd[1629]: time="2025-02-14T01:11:24.383123387Z" level=info msg="CreateContainer within sandbox \"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:11:24.408923 containerd[1629]: time="2025-02-14T01:11:24.408298934Z" level=info msg="CreateContainer within sandbox \"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"da72aa2cee114ccd27112c90f9c68e5b3954947300ec4ec4fe3b4f5f440852aa\"" Feb 14 01:11:24.412469 containerd[1629]: time="2025-02-14T01:11:24.412386265Z" level=info msg="StartContainer for \"da72aa2cee114ccd27112c90f9c68e5b3954947300ec4ec4fe3b4f5f440852aa\"" Feb 14 01:11:24.565871 containerd[1629]: time="2025-02-14T01:11:24.565820010Z" level=info msg="StartContainer for \"da72aa2cee114ccd27112c90f9c68e5b3954947300ec4ec4fe3b4f5f440852aa\" returns successfully" Feb 14 01:11:24.849227 kubelet[2938]: I0214 01:11:24.849109 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7654bfc7bc-qthrw" podStartSLOduration=31.678615985 podStartE2EDuration="35.849026761s" podCreationTimestamp="2025-02-14 01:10:49 +0000 UTC" firstStartedPulling="2025-02-14 01:11:20.202294214 +0000 UTC m=+54.070340721" lastFinishedPulling="2025-02-14 01:11:24.372704977 +0000 UTC m=+58.240751497" observedRunningTime="2025-02-14 01:11:24.847048951 +0000 UTC m=+58.715095488" watchObservedRunningTime="2025-02-14 01:11:24.849026761 +0000 UTC m=+58.717073278" Feb 14 01:11:25.832147 kubelet[2938]: I0214 01:11:25.831347 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:11:26.306317 containerd[1629]: time="2025-02-14T01:11:26.306064522Z" level=info msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" Feb 14 01:11:26.443577 containerd[1629]: time="2025-02-14T01:11:26.443516747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:26.446041 containerd[1629]: time="2025-02-14T01:11:26.445981901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 14 01:11:26.448644 containerd[1629]: time="2025-02-14T01:11:26.448592400Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:26.480264 containerd[1629]: time="2025-02-14T01:11:26.480193811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:26.481024 containerd[1629]: time="2025-02-14T01:11:26.480984345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.106743781s" Feb 14 01:11:26.481138 containerd[1629]: time="2025-02-14T01:11:26.481042573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 14 01:11:26.489822 containerd[1629]: time="2025-02-14T01:11:26.489520719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 14 01:11:26.491477 
containerd[1629]: time="2025-02-14T01:11:26.491429570Z" level=info msg="CreateContainer within sandbox \"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 14 01:11:26.548978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550461669.mount: Deactivated successfully. Feb 14 01:11:26.570825 containerd[1629]: time="2025-02-14T01:11:26.569403767Z" level=info msg="CreateContainer within sandbox \"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d2432c91c5d3d9c085241d30c5f0c54d1219456d143f97dabd7f8e70763da285\"" Feb 14 01:11:26.578364 containerd[1629]: time="2025-02-14T01:11:26.577383960Z" level=info msg="StartContainer for \"d2432c91c5d3d9c085241d30c5f0c54d1219456d143f97dabd7f8e70763da285\"" Feb 14 01:11:26.780502 containerd[1629]: time="2025-02-14T01:11:26.779794302Z" level=info msg="StartContainer for \"d2432c91c5d3d9c085241d30c5f0c54d1219456d143f97dabd7f8e70763da285\" returns successfully" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.558 [WARNING][4966] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ec0bfed-3539-4181-a415-bf87b9386eac", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e", Pod:"calico-apiserver-7654bfc7bc-qthrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a786b94afc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.573 [INFO][4966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.573 [INFO][4966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" iface="eth0" netns="" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.576 [INFO][4966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.576 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.847 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.848 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.848 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.868 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.868 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.872 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:26.889384 containerd[1629]: 2025-02-14 01:11:26.882 [INFO][4966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:26.892204 containerd[1629]: time="2025-02-14T01:11:26.890584564Z" level=info msg="TearDown network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" successfully" Feb 14 01:11:26.892204 containerd[1629]: time="2025-02-14T01:11:26.890638880Z" level=info msg="StopPodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" returns successfully" Feb 14 01:11:26.892204 containerd[1629]: time="2025-02-14T01:11:26.891664594Z" level=info msg="RemovePodSandbox for \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" Feb 14 01:11:26.892204 containerd[1629]: time="2025-02-14T01:11:26.891706955Z" level=info msg="Forcibly stopping sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\"" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:26.984 [WARNING][5034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ec0bfed-3539-4181-a415-bf87b9386eac", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"399f9db4bc3b0435ed396b2b73659b4fe8aa928d9f1b6e8ae08bc40b0c20374e", Pod:"calico-apiserver-7654bfc7bc-qthrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a786b94afc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:26.985 [INFO][5034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:26.985 [INFO][5034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" iface="eth0" netns="" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:26.985 [INFO][5034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:26.985 [INFO][5034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.016 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.016 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.017 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.025 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.027 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" HandleID="k8s-pod-network.d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--qthrw-eth0" Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.030 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.033205 containerd[1629]: 2025-02-14 01:11:27.031 [INFO][5034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310" Feb 14 01:11:27.035402 containerd[1629]: time="2025-02-14T01:11:27.033256130Z" level=info msg="TearDown network for sandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" successfully" Feb 14 01:11:27.044776 containerd[1629]: time="2025-02-14T01:11:27.044653766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:11:27.045104 containerd[1629]: time="2025-02-14T01:11:27.044817542Z" level=info msg="RemovePodSandbox \"d8d403c9d9f4005c6c6655a698cea0d765602cad712f0344f8676e6c8bf1d310\" returns successfully" Feb 14 01:11:27.046402 containerd[1629]: time="2025-02-14T01:11:27.045969020Z" level=info msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.113 [WARNING][5059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1c564fe-b330-4d68-9185-0946498c1f87", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb", Pod:"csi-node-driver-mpl8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f7905b586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.113 [INFO][5059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.114 [INFO][5059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" iface="eth0" netns="" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.114 [INFO][5059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.114 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.147 [INFO][5065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.147 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.147 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.157 [WARNING][5065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.157 [INFO][5065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.160 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.165128 containerd[1629]: 2025-02-14 01:11:27.162 [INFO][5059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.169222 containerd[1629]: time="2025-02-14T01:11:27.166007471Z" level=info msg="TearDown network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" successfully" Feb 14 01:11:27.169222 containerd[1629]: time="2025-02-14T01:11:27.166050177Z" level=info msg="StopPodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" returns successfully" Feb 14 01:11:27.169222 containerd[1629]: time="2025-02-14T01:11:27.167936205Z" level=info msg="RemovePodSandbox for \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" Feb 14 01:11:27.169222 containerd[1629]: time="2025-02-14T01:11:27.167976890Z" level=info msg="Forcibly stopping sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\"" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.263 [WARNING][5084] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1c564fe-b330-4d68-9185-0946498c1f87", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb", Pod:"csi-node-driver-mpl8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f7905b586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.264 [INFO][5084] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.264 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" iface="eth0" netns="" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.264 [INFO][5084] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.264 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.312 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.312 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.312 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.321 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.321 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" HandleID="k8s-pod-network.01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Workload="srv--krhnz.gb1.brightbox.com-k8s-csi--node--driver--mpl8k-eth0" Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.325 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.329056 containerd[1629]: 2025-02-14 01:11:27.326 [INFO][5084] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819" Feb 14 01:11:27.331974 containerd[1629]: time="2025-02-14T01:11:27.329148121Z" level=info msg="TearDown network for sandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" successfully" Feb 14 01:11:27.346695 containerd[1629]: time="2025-02-14T01:11:27.346619639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:11:27.346843 containerd[1629]: time="2025-02-14T01:11:27.346762973Z" level=info msg="RemovePodSandbox \"01d51bf4c062f86ab5a28ba55c37fce9b90f455db5e97686a71a1285f6bb6819\" returns successfully" Feb 14 01:11:27.347953 containerd[1629]: time="2025-02-14T01:11:27.347845405Z" level=info msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.409 [WARNING][5110] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b1f23b66-09c4-4269-b990-1ebe27f4a53b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b", Pod:"coredns-7db6d8ff4d-rjd7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic959f904f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.410 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.410 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" iface="eth0" netns="" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.410 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.410 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.440 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.441 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.441 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.459 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.459 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.463 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.471991 containerd[1629]: 2025-02-14 01:11:27.465 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.471991 containerd[1629]: time="2025-02-14T01:11:27.469668790Z" level=info msg="TearDown network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" successfully" Feb 14 01:11:27.471991 containerd[1629]: time="2025-02-14T01:11:27.469713117Z" level=info msg="StopPodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" returns successfully" Feb 14 01:11:27.471991 containerd[1629]: time="2025-02-14T01:11:27.471311960Z" level=info msg="RemovePodSandbox for \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" Feb 14 01:11:27.471991 containerd[1629]: time="2025-02-14T01:11:27.471371396Z" level=info msg="Forcibly stopping sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\"" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.538 [WARNING][5135] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b1f23b66-09c4-4269-b990-1ebe27f4a53b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"3400fdffc76170f9e1dedbb5a49b667f0a9842f5b47178b24c966d7eb095658b", Pod:"coredns-7db6d8ff4d-rjd7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic959f904f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.538 [INFO][5135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.538 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" iface="eth0" netns="" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.538 [INFO][5135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.538 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.573 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.573 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.573 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.582 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.582 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" HandleID="k8s-pod-network.45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--rjd7n-eth0" Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.585 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.589929 containerd[1629]: 2025-02-14 01:11:27.587 [INFO][5135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416" Feb 14 01:11:27.591927 containerd[1629]: time="2025-02-14T01:11:27.589999285Z" level=info msg="TearDown network for sandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" successfully" Feb 14 01:11:27.594402 containerd[1629]: time="2025-02-14T01:11:27.594367137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:11:27.594501 containerd[1629]: time="2025-02-14T01:11:27.594431228Z" level=info msg="RemovePodSandbox \"45972a233c44e217fb47d7b09ffe0d311963963b917f0f3091fcf83419fe9416\" returns successfully" Feb 14 01:11:27.595659 containerd[1629]: time="2025-02-14T01:11:27.595250225Z" level=info msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.654 [WARNING][5159] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0", GenerateName:"calico-kube-controllers-6496b8966d-", Namespace:"calico-system", SelfLink:"", UID:"8293e9cd-d766-48c9-bffd-cc310990d080", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496b8966d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855", Pod:"calico-kube-controllers-6496b8966d-qmcxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ca936e326b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.656 [INFO][5159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.656 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" iface="eth0" netns="" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.656 [INFO][5159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.656 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.685 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.685 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.685 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.698 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.698 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.701 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.704309 containerd[1629]: 2025-02-14 01:11:27.702 [INFO][5159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.705338 containerd[1629]: time="2025-02-14T01:11:27.705282229Z" level=info msg="TearDown network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" successfully" Feb 14 01:11:27.705437 containerd[1629]: time="2025-02-14T01:11:27.705339181Z" level=info msg="StopPodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" returns successfully" Feb 14 01:11:27.706104 containerd[1629]: time="2025-02-14T01:11:27.706071427Z" level=info msg="RemovePodSandbox for \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" Feb 14 01:11:27.706199 containerd[1629]: time="2025-02-14T01:11:27.706128479Z" level=info msg="Forcibly stopping sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\"" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.766 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0", GenerateName:"calico-kube-controllers-6496b8966d-", Namespace:"calico-system", SelfLink:"", UID:"8293e9cd-d766-48c9-bffd-cc310990d080", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496b8966d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855", Pod:"calico-kube-controllers-6496b8966d-qmcxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ca936e326b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.766 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.766 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" iface="eth0" netns="" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.766 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.766 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.796 [INFO][5189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.797 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.797 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.808 [WARNING][5189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.808 [INFO][5189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" HandleID="k8s-pod-network.9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.810 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.816121 containerd[1629]: 2025-02-14 01:11:27.813 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b" Feb 14 01:11:27.816121 containerd[1629]: time="2025-02-14T01:11:27.816051997Z" level=info msg="TearDown network for sandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" successfully" Feb 14 01:11:27.823274 containerd[1629]: time="2025-02-14T01:11:27.823227128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:11:27.823363 containerd[1629]: time="2025-02-14T01:11:27.823301092Z" level=info msg="RemovePodSandbox \"9b510bde7f8c3038ca886f1e12786b589072941a2493b2b5f4b9e4dcb55dcd8b\" returns successfully" Feb 14 01:11:27.825228 containerd[1629]: time="2025-02-14T01:11:27.824973934Z" level=info msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.878 [WARNING][5207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cb84537-ce23-40cb-9b1d-10b46a87ae08", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0", Pod:"coredns-7db6d8ff4d-vgn94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43e7da0f02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.878 [INFO][5207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.878 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" iface="eth0" netns="" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.879 [INFO][5207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.879 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.909 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.909 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.909 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.917 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.917 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.919 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:27.922256 containerd[1629]: 2025-02-14 01:11:27.920 [INFO][5207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:27.923361 containerd[1629]: time="2025-02-14T01:11:27.923175416Z" level=info msg="TearDown network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" successfully" Feb 14 01:11:27.923361 containerd[1629]: time="2025-02-14T01:11:27.923260575Z" level=info msg="StopPodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" returns successfully" Feb 14 01:11:27.924244 containerd[1629]: time="2025-02-14T01:11:27.924210051Z" level=info msg="RemovePodSandbox for \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" Feb 14 01:11:27.924383 containerd[1629]: time="2025-02-14T01:11:27.924256319Z" level=info msg="Forcibly stopping sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\"" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:27.983 [WARNING][5231] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cb84537-ce23-40cb-9b1d-10b46a87ae08", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"b5053692da71aa117eecf2c2a6a4e150f2c6af89056c700c9c5cd579cf1eb1a0", Pod:"coredns-7db6d8ff4d-vgn94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43e7da0f02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:27.983 [INFO][5231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:27.983 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" iface="eth0" netns="" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:27.984 [INFO][5231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:27.984 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.015 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.015 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.016 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.026 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.026 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" HandleID="k8s-pod-network.34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Workload="srv--krhnz.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--vgn94-eth0" Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.029 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:28.032725 containerd[1629]: 2025-02-14 01:11:28.030 [INFO][5231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54" Feb 14 01:11:28.034365 containerd[1629]: time="2025-02-14T01:11:28.032770988Z" level=info msg="TearDown network for sandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" successfully" Feb 14 01:11:28.036440 containerd[1629]: time="2025-02-14T01:11:28.036401302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:11:28.036628 containerd[1629]: time="2025-02-14T01:11:28.036493007Z" level=info msg="RemovePodSandbox \"34fe1b968a9dce5038eaaee6ce6e35660a60c9a923db09a3f9b2d381a7db7a54\" returns successfully" Feb 14 01:11:29.320341 containerd[1629]: time="2025-02-14T01:11:29.319815394Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.440 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.446 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" iface="eth0" netns="/var/run/netns/cni-d8a9113f-72ba-bd31-9380-0351070c08e7" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.447 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" iface="eth0" netns="/var/run/netns/cni-d8a9113f-72ba-bd31-9380-0351070c08e7" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.447 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" iface="eth0" netns="/var/run/netns/cni-d8a9113f-72ba-bd31-9380-0351070c08e7" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.447 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.447 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.498 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.499 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.500 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.514 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.514 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.518 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:29.527706 containerd[1629]: 2025-02-14 01:11:29.522 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:11:29.533471 containerd[1629]: time="2025-02-14T01:11:29.531409158Z" level=info msg="TearDown network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" successfully" Feb 14 01:11:29.533471 containerd[1629]: time="2025-02-14T01:11:29.531478503Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" returns successfully" Feb 14 01:11:29.538183 systemd[1]: run-netns-cni\x2dd8a9113f\x2d72ba\x2dbd31\x2d9380\x2d0351070c08e7.mount: Deactivated successfully. 
Feb 14 01:11:29.545816 containerd[1629]: time="2025-02-14T01:11:29.544908348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-jvfgq,Uid:7bc0044f-26fb-4412-b84f-e458618cf4b4,Namespace:calico-apiserver,Attempt:1,}" Feb 14 01:11:29.846498 systemd-networkd[1261]: cali4a647d7a030: Link UP Feb 14 01:11:29.851807 systemd-networkd[1261]: cali4a647d7a030: Gained carrier Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.653 [INFO][5273] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0 calico-apiserver-7654bfc7bc- calico-apiserver 7bc0044f-26fb-4412-b84f-e458618cf4b4 885 0 2025-02-14 01:10:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7654bfc7bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com calico-apiserver-7654bfc7bc-jvfgq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a647d7a030 [] []}} ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.655 [INFO][5273] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.741 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" HandleID="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.758 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" HandleID="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102fc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-krhnz.gb1.brightbox.com", "pod":"calico-apiserver-7654bfc7bc-jvfgq", "timestamp":"2025-02-14 01:11:29.741015391 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.758 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.759 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.759 [INFO][5284] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.762 [INFO][5284] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.773 [INFO][5284] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.784 [INFO][5284] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.788 [INFO][5284] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.792 [INFO][5284] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.792 [INFO][5284] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.796 [INFO][5284] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7 Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.805 [INFO][5284] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.826 [INFO][5284] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.198/26] block=192.168.126.192/26 handle="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.827 [INFO][5284] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.198/26] handle="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.827 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 01:11:29.926252 containerd[1629]: 2025-02-14 01:11:29.827 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.198/26] IPv6=[] ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" HandleID="k8s-pod-network.674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.834 [INFO][5273] cni-plugin/k8s.go 386: Populated endpoint ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bc0044f-26fb-4412-b84f-e458618cf4b4", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7654bfc7bc-jvfgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a647d7a030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.836 [INFO][5273] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.198/32] ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.836 [INFO][5273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a647d7a030 ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.854 [INFO][5273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.863 [INFO][5273] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bc0044f-26fb-4412-b84f-e458618cf4b4", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7", Pod:"calico-apiserver-7654bfc7bc-jvfgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a647d7a030", MAC:"42:1e:fc:3b:13:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:29.929278 containerd[1629]: 2025-02-14 01:11:29.902 [INFO][5273] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7" Namespace="calico-apiserver" Pod="calico-apiserver-7654bfc7bc-jvfgq" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:11:30.017273 containerd[1629]: time="2025-02-14T01:11:30.017132803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:30.017602 containerd[1629]: time="2025-02-14T01:11:30.017556868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:30.017785 containerd[1629]: time="2025-02-14T01:11:30.017743189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:30.020227 containerd[1629]: time="2025-02-14T01:11:30.019940338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:30.046482 containerd[1629]: time="2025-02-14T01:11:30.045884656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:30.047555 containerd[1629]: time="2025-02-14T01:11:30.047489291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 14 01:11:30.048134 containerd[1629]: time="2025-02-14T01:11:30.048093332Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:30.055131 containerd[1629]: time="2025-02-14T01:11:30.055064824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:30.057151 containerd[1629]: time="2025-02-14T01:11:30.057099215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.567511023s" Feb 14 01:11:30.057251 containerd[1629]: time="2025-02-14T01:11:30.057165618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 14 01:11:30.065591 containerd[1629]: time="2025-02-14T01:11:30.065296963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 14 01:11:30.088362 containerd[1629]: time="2025-02-14T01:11:30.088277637Z" level=info msg="CreateContainer within sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 01:11:30.145496 containerd[1629]: time="2025-02-14T01:11:30.144926905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7654bfc7bc-jvfgq,Uid:7bc0044f-26fb-4412-b84f-e458618cf4b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7\"" Feb 14 01:11:30.154012 containerd[1629]: time="2025-02-14T01:11:30.153967217Z" level=info msg="CreateContainer within sandbox \"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:11:30.226506 containerd[1629]: time="2025-02-14T01:11:30.226408107Z" level=info msg="CreateContainer within sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\"" Feb 14 01:11:30.227779 containerd[1629]: time="2025-02-14T01:11:30.227740350Z" level=info msg="StartContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\"" Feb 14 01:11:30.236737 containerd[1629]: time="2025-02-14T01:11:30.236546135Z" level=info msg="CreateContainer within sandbox \"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25307921fa19a8eb224165d22832b9b49ae014b1dc7c3a622cfee814c8176c53\"" Feb 14 01:11:30.240470 containerd[1629]: time="2025-02-14T01:11:30.239760442Z" level=info msg="StartContainer for \"25307921fa19a8eb224165d22832b9b49ae014b1dc7c3a622cfee814c8176c53\"" Feb 14 01:11:30.362992 containerd[1629]: time="2025-02-14T01:11:30.362909275Z" level=info msg="StartContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" returns successfully" Feb 14 01:11:30.388754 containerd[1629]: time="2025-02-14T01:11:30.388608472Z" level=info msg="StartContainer for \"25307921fa19a8eb224165d22832b9b49ae014b1dc7c3a622cfee814c8176c53\" returns successfully" Feb 14 01:11:30.928471 kubelet[2938]: I0214 01:11:30.928151 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7654bfc7bc-jvfgq" podStartSLOduration=41.928076219 podStartE2EDuration="41.928076219s" podCreationTimestamp="2025-02-14 01:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:11:30.925891365 +0000 UTC m=+64.793937893" watchObservedRunningTime="2025-02-14 01:11:30.928076219 +0000 UTC m=+64.796122738" Feb 14 01:11:30.938650 systemd-networkd[1261]: cali4a647d7a030: Gained IPv6LL Feb 14 01:11:30.973924 kubelet[2938]: I0214 01:11:30.973100 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6496b8966d-qmcxb" podStartSLOduration=33.519302003 podStartE2EDuration="41.972938482s" podCreationTimestamp="2025-02-14 01:10:49 +0000 UTC" firstStartedPulling="2025-02-14 01:11:21.605881825 +0000 UTC m=+55.473928337" lastFinishedPulling="2025-02-14 01:11:30.059518284 +0000 UTC m=+63.927564816" observedRunningTime="2025-02-14 01:11:30.96768364 +0000 UTC m=+64.835730165" watchObservedRunningTime="2025-02-14 01:11:30.972938482 +0000 UTC m=+64.840984996" Feb 14 01:11:32.375718 containerd[1629]: time="2025-02-14T01:11:32.375342107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:32.377930 containerd[1629]: time="2025-02-14T01:11:32.377526742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 14 01:11:32.380791 containerd[1629]: time="2025-02-14T01:11:32.380663324Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:32.385278 containerd[1629]: time="2025-02-14T01:11:32.385181310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:11:32.387593 containerd[1629]: time="2025-02-14T01:11:32.387410563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.322052107s" Feb 14 01:11:32.387593 containerd[1629]: 
time="2025-02-14T01:11:32.387517277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 14 01:11:32.393816 containerd[1629]: time="2025-02-14T01:11:32.393154470Z" level=info msg="CreateContainer within sandbox \"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 14 01:11:32.465666 containerd[1629]: time="2025-02-14T01:11:32.465434703Z" level=info msg="CreateContainer within sandbox \"ec0892898f862a0c662e3241533090537b01ab30e48884f8239a460027a3a0fb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4aa56ba688811120dd9b5a368c2d25b1485d335748176ea6d20a67e401ff3a2b\"" Feb 14 01:11:32.471512 containerd[1629]: time="2025-02-14T01:11:32.467703550Z" level=info msg="StartContainer for \"4aa56ba688811120dd9b5a368c2d25b1485d335748176ea6d20a67e401ff3a2b\"" Feb 14 01:11:32.621823 containerd[1629]: time="2025-02-14T01:11:32.621366948Z" level=info msg="StartContainer for \"4aa56ba688811120dd9b5a368c2d25b1485d335748176ea6d20a67e401ff3a2b\" returns successfully" Feb 14 01:11:33.001947 kubelet[2938]: I0214 01:11:33.001860 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mpl8k" podStartSLOduration=31.892729191 podStartE2EDuration="44.00182679s" podCreationTimestamp="2025-02-14 01:10:49 +0000 UTC" firstStartedPulling="2025-02-14 01:11:20.280925179 +0000 UTC m=+54.148971686" lastFinishedPulling="2025-02-14 01:11:32.390022767 +0000 UTC m=+66.258069285" observedRunningTime="2025-02-14 01:11:32.999858357 +0000 UTC m=+66.867904886" watchObservedRunningTime="2025-02-14 01:11:33.00182679 +0000 UTC m=+66.869873330" Feb 14 01:11:33.837138 kubelet[2938]: I0214 01:11:33.835513 2938 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 14 01:11:33.837138 kubelet[2938]: I0214 01:11:33.837046 2938 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 14 01:11:41.546488 kubelet[2938]: I0214 01:11:41.545920 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:11:45.187999 containerd[1629]: time="2025-02-14T01:11:45.187810774Z" level=info msg="StopContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" with timeout 300 (s)" Feb 14 01:11:45.192896 containerd[1629]: time="2025-02-14T01:11:45.192759994Z" level=info msg="Stop container \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" with signal terminated" Feb 14 01:11:45.515066 containerd[1629]: time="2025-02-14T01:11:45.512809086Z" level=info msg="StopContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" with timeout 30 (s)" Feb 14 01:11:45.524697 containerd[1629]: time="2025-02-14T01:11:45.520696167Z" level=info msg="Stop container \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" with signal terminated" Feb 14 01:11:45.712872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487-rootfs.mount: Deactivated successfully. 
Feb 14 01:11:45.788821 containerd[1629]: time="2025-02-14T01:11:45.771157789Z" level=info msg="shim disconnected" id=ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487 namespace=k8s.io Feb 14 01:11:45.800376 containerd[1629]: time="2025-02-14T01:11:45.800299363Z" level=warning msg="cleaning up after shim disconnected" id=ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487 namespace=k8s.io Feb 14 01:11:45.800376 containerd[1629]: time="2025-02-14T01:11:45.800372704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:45.840796 containerd[1629]: time="2025-02-14T01:11:45.840720196Z" level=info msg="StopContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" returns successfully" Feb 14 01:11:45.845511 containerd[1629]: time="2025-02-14T01:11:45.844816925Z" level=info msg="StopPodSandbox for \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\"" Feb 14 01:11:45.852480 containerd[1629]: time="2025-02-14T01:11:45.852259385Z" level=info msg="Container to stop \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:11:45.862770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855-shm.mount: Deactivated successfully. Feb 14 01:11:45.958887 containerd[1629]: time="2025-02-14T01:11:45.958351877Z" level=info msg="shim disconnected" id=95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855 namespace=k8s.io Feb 14 01:11:45.958887 containerd[1629]: time="2025-02-14T01:11:45.958667155Z" level=warning msg="cleaning up after shim disconnected" id=95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855 namespace=k8s.io Feb 14 01:11:45.958887 containerd[1629]: time="2025-02-14T01:11:45.958686320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:45.960398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855-rootfs.mount: Deactivated successfully. Feb 14 01:11:45.989486 containerd[1629]: time="2025-02-14T01:11:45.989016218Z" level=info msg="StopContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" with timeout 5 (s)" Feb 14 01:11:45.995648 containerd[1629]: time="2025-02-14T01:11:45.994950384Z" level=info msg="Stop container \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" with signal terminated" Feb 14 01:11:46.173242 containerd[1629]: time="2025-02-14T01:11:46.172296812Z" level=info msg="shim disconnected" id=4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9 namespace=k8s.io Feb 14 01:11:46.173242 containerd[1629]: time="2025-02-14T01:11:46.172379921Z" level=warning msg="cleaning up after shim disconnected" id=4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9 namespace=k8s.io Feb 14 01:11:46.173242 containerd[1629]: time="2025-02-14T01:11:46.172397098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:46.176632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9-rootfs.mount: Deactivated successfully. 
Feb 14 01:11:46.218988 containerd[1629]: time="2025-02-14T01:11:46.218905845Z" level=warning msg="cleanup warnings time=\"2025-02-14T01:11:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 14 01:11:46.249704 containerd[1629]: time="2025-02-14T01:11:46.249130642Z" level=info msg="StopContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" returns successfully" Feb 14 01:11:46.253723 containerd[1629]: time="2025-02-14T01:11:46.252868375Z" level=info msg="StopPodSandbox for \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\"" Feb 14 01:11:46.254522 containerd[1629]: time="2025-02-14T01:11:46.254406499Z" level=info msg="Container to stop \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:11:46.254908 containerd[1629]: time="2025-02-14T01:11:46.254526154Z" level=info msg="Container to stop \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:11:46.254908 containerd[1629]: time="2025-02-14T01:11:46.254556879Z" level=info msg="Container to stop \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:11:46.283098 systemd-networkd[1261]: cali9ca936e326b: Link DOWN Feb 14 01:11:46.283113 systemd-networkd[1261]: cali9ca936e326b: Lost carrier Feb 14 01:11:46.370575 containerd[1629]: time="2025-02-14T01:11:46.369752315Z" level=info msg="shim disconnected" id=44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72 namespace=k8s.io Feb 14 01:11:46.370575 containerd[1629]: time="2025-02-14T01:11:46.369825510Z" level=warning msg="cleaning up after shim disconnected" id=44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72 namespace=k8s.io Feb 14 01:11:46.370575 containerd[1629]: time="2025-02-14T01:11:46.369841695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:46.456792 containerd[1629]: time="2025-02-14T01:11:46.456577542Z" level=info msg="TearDown network for sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" successfully" Feb 14 01:11:46.456792 containerd[1629]: time="2025-02-14T01:11:46.456632921Z" level=info msg="StopPodSandbox for \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" returns successfully" Feb 14 01:11:46.533033 kubelet[2938]: I0214 01:11:46.532942 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vttjc\" (UniqueName: \"kubernetes.io/projected/2b748ae9-d5a4-45cf-b668-849d2794f614-kube-api-access-vttjc\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533059 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-policysync\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533116 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-net-dir\") pod 
\"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533147 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-run-calico\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533185 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-xtables-lock\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533212 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-bin-dir\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535583 kubelet[2938]: I0214 01:11:46.533251 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-flexvol-driver-host\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535886 kubelet[2938]: I0214 01:11:46.533282 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-log-dir\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535886 kubelet[2938]: I0214 01:11:46.533305 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-lib-calico\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535886 kubelet[2938]: I0214 01:11:46.533335 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-lib-modules\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535886 kubelet[2938]: I0214 01:11:46.533372 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b748ae9-d5a4-45cf-b668-849d2794f614-node-certs\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.535886 kubelet[2938]: I0214 01:11:46.533414 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b748ae9-d5a4-45cf-b668-849d2794f614-tigera-ca-bundle\") pod \"2b748ae9-d5a4-45cf-b668-849d2794f614\" (UID: \"2b748ae9-d5a4-45cf-b668-849d2794f614\") " Feb 14 01:11:46.571062 kubelet[2938]: I0214 01:11:46.569496 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-policysync" (OuterVolumeSpecName: "policysync") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: 
"2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.571062 kubelet[2938]: I0214 01:11:46.569610 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.574859 kubelet[2938]: I0214 01:11:46.570076 2938 topology_manager.go:215] "Topology Admit Handler" podUID="89675f59-426d-4c9c-807d-e3618670e87c" podNamespace="calico-system" podName="calico-node-6dbzf" Feb 14 01:11:46.576383 kubelet[2938]: I0214 01:11:46.575797 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b748ae9-d5a4-45cf-b668-849d2794f614-kube-api-access-vttjc" (OuterVolumeSpecName: "kube-api-access-vttjc") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "kube-api-access-vttjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 01:11:46.576383 kubelet[2938]: I0214 01:11:46.575879 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576383 kubelet[2938]: I0214 01:11:46.575916 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576383 kubelet[2938]: I0214 01:11:46.575965 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576383 kubelet[2938]: I0214 01:11:46.575999 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576947 kubelet[2938]: I0214 01:11:46.576031 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576947 kubelet[2938]: I0214 01:11:46.576061 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.576947 kubelet[2938]: I0214 01:11:46.576094 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 01:11:46.580540 kubelet[2938]: E0214 01:11:46.580273 2938 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" containerName="flexvol-driver" Feb 14 01:11:46.580540 kubelet[2938]: E0214 01:11:46.580422 2938 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" containerName="install-cni" Feb 14 01:11:46.580540 kubelet[2938]: E0214 01:11:46.580438 2938 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" containerName="calico-node" Feb 14 01:11:46.580748 kubelet[2938]: I0214 01:11:46.580639 2938 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" containerName="calico-node" Feb 14 01:11:46.586496 kubelet[2938]: I0214 01:11:46.586085 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b748ae9-d5a4-45cf-b668-849d2794f614-node-certs" (OuterVolumeSpecName: "node-certs") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 01:11:46.598671 kubelet[2938]: I0214 01:11:46.596788 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b748ae9-d5a4-45cf-b668-849d2794f614-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2b748ae9-d5a4-45cf-b668-849d2794f614" (UID: "2b748ae9-d5a4-45cf-b668-849d2794f614"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 01:11:46.636875 systemd[1]: var-lib-kubelet-pods-2b748ae9\x2dd5a4\x2d45cf\x2db668\x2d849d2794f614-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 14 01:11:46.637147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72-rootfs.mount: Deactivated successfully. Feb 14 01:11:46.637390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72-shm.mount: Deactivated successfully. Feb 14 01:11:46.637623 systemd[1]: var-lib-kubelet-pods-2b748ae9\x2dd5a4\x2d45cf\x2db668\x2d849d2794f614-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvttjc.mount: Deactivated successfully. Feb 14 01:11:46.639627 systemd[1]: var-lib-kubelet-pods-2b748ae9\x2dd5a4\x2d45cf\x2db668\x2d849d2794f614-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Feb 14 01:11:46.643355 kubelet[2938]: I0214 01:11:46.639656 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-policysync\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.643355 kubelet[2938]: I0214 01:11:46.639720 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89675f59-426d-4c9c-807d-e3618670e87c-tigera-ca-bundle\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.643355 kubelet[2938]: I0214 01:11:46.639768 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-var-run-calico\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.643355 kubelet[2938]: I0214 01:11:46.639799 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-xtables-lock\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.643355 kubelet[2938]: I0214 01:11:46.639830 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-cni-net-dir\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.644904 kubelet[2938]: I0214 01:11:46.639869 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-cni-bin-dir\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.644904 kubelet[2938]: I0214 01:11:46.639899 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-var-lib-calico\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.644904 kubelet[2938]: I0214 01:11:46.639934 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-cni-log-dir\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.644904 kubelet[2938]: I0214 01:11:46.639967 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/89675f59-426d-4c9c-807d-e3618670e87c-node-certs\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.644904 kubelet[2938]: I0214 01:11:46.640011 2938 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg2h\" (UniqueName: \"kubernetes.io/projected/89675f59-426d-4c9c-807d-e3618670e87c-kube-api-access-dbg2h\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640041 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-flexvol-driver-host\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640080 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89675f59-426d-4c9c-807d-e3618670e87c-lib-modules\") pod \"calico-node-6dbzf\" (UID: \"89675f59-426d-4c9c-807d-e3618670e87c\") " pod="calico-system/calico-node-6dbzf" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640120 2938 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-xtables-lock\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640150 2938 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-bin-dir\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640167 2938 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-flexvol-driver-host\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640184 2938 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-log-dir\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.647946 kubelet[2938]: I0214 01:11:46.640199 2938 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-lib-calico\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640214 2938 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-lib-modules\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640243 2938 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b748ae9-d5a4-45cf-b668-849d2794f614-node-certs\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640259 2938 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b748ae9-d5a4-45cf-b668-849d2794f614-tigera-ca-bundle\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640273 2938 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vttjc\" (UniqueName: 
\"kubernetes.io/projected/2b748ae9-d5a4-45cf-b668-849d2794f614-kube-api-access-vttjc\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640289 2938 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-policysync\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640304 2938 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-cni-net-dir\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.649626 kubelet[2938]: I0214 01:11:46.640318 2938 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b748ae9-d5a4-45cf-b668-849d2794f614-var-run-calico\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.271 [INFO][5668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.273 [INFO][5668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" iface="eth0" netns="/var/run/netns/cni-c4eb54f5-df6c-fbe2-6502-3d3b42a2e660" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.274 [INFO][5668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" iface="eth0" netns="/var/run/netns/cni-c4eb54f5-df6c-fbe2-6502-3d3b42a2e660" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.302 [INFO][5668] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" after=28.71616ms iface="eth0" netns="/var/run/netns/cni-c4eb54f5-df6c-fbe2-6502-3d3b42a2e660" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.302 [INFO][5668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.302 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.492 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.492 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.492 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.648 [INFO][5715] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.648 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.652 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:11:46.661182 containerd[1629]: 2025-02-14 01:11:46.657 [INFO][5668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:11:46.671241 containerd[1629]: time="2025-02-14T01:11:46.661835527Z" level=info msg="TearDown network for sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" successfully" Feb 14 01:11:46.671241 containerd[1629]: time="2025-02-14T01:11:46.661883995Z" level=info msg="StopPodSandbox for \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" returns successfully" Feb 14 01:11:46.672388 systemd[1]: run-netns-cni\x2dc4eb54f5\x2ddf6c\x2dfbe2\x2d6502\x2d3d3b42a2e660.mount: Deactivated successfully. Feb 14 01:11:46.750630 kubelet[2938]: I0214 01:11:46.749761 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8293e9cd-d766-48c9-bffd-cc310990d080-tigera-ca-bundle\") pod \"8293e9cd-d766-48c9-bffd-cc310990d080\" (UID: \"8293e9cd-d766-48c9-bffd-cc310990d080\") " Feb 14 01:11:46.750630 kubelet[2938]: I0214 01:11:46.749860 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl8s7\" (UniqueName: \"kubernetes.io/projected/8293e9cd-d766-48c9-bffd-cc310990d080-kube-api-access-zl8s7\") pod \"8293e9cd-d766-48c9-bffd-cc310990d080\" (UID: \"8293e9cd-d766-48c9-bffd-cc310990d080\") " Feb 14 01:11:46.776548 kubelet[2938]: I0214 01:11:46.775958 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8293e9cd-d766-48c9-bffd-cc310990d080-kube-api-access-zl8s7" (OuterVolumeSpecName: "kube-api-access-zl8s7") pod "8293e9cd-d766-48c9-bffd-cc310990d080" (UID: "8293e9cd-d766-48c9-bffd-cc310990d080"). InnerVolumeSpecName "kube-api-access-zl8s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 01:11:46.777124 systemd[1]: var-lib-kubelet-pods-8293e9cd\x2dd766\x2d48c9\x2dbffd\x2dcc310990d080-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Feb 14 01:11:46.777370 systemd[1]: var-lib-kubelet-pods-8293e9cd\x2dd766\x2d48c9\x2dbffd\x2dcc310990d080-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzl8s7.mount: Deactivated successfully. 
Feb 14 01:11:46.790489 kubelet[2938]: I0214 01:11:46.789978 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8293e9cd-d766-48c9-bffd-cc310990d080-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8293e9cd-d766-48c9-bffd-cc310990d080" (UID: "8293e9cd-d766-48c9-bffd-cc310990d080"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 01:11:46.851738 kubelet[2938]: I0214 01:11:46.851099 2938 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8293e9cd-d766-48c9-bffd-cc310990d080-tigera-ca-bundle\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.852059 kubelet[2938]: I0214 01:11:46.852020 2938 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zl8s7\" (UniqueName: \"kubernetes.io/projected/8293e9cd-d766-48c9-bffd-cc310990d080-kube-api-access-zl8s7\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:46.903929 containerd[1629]: time="2025-02-14T01:11:46.903865985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6dbzf,Uid:89675f59-426d-4c9c-807d-e3618670e87c,Namespace:calico-system,Attempt:0,}" Feb 14 01:11:46.979253 containerd[1629]: time="2025-02-14T01:11:46.976039121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:46.979253 containerd[1629]: time="2025-02-14T01:11:46.976125809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:46.979253 containerd[1629]: time="2025-02-14T01:11:46.976150009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:46.979253 containerd[1629]: time="2025-02-14T01:11:46.976299482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:46.991420 containerd[1629]: time="2025-02-14T01:11:46.991256843Z" level=info msg="shim disconnected" id=d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5 namespace=k8s.io Feb 14 01:11:46.991586 containerd[1629]: time="2025-02-14T01:11:46.991421383Z" level=warning msg="cleaning up after shim disconnected" id=d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5 namespace=k8s.io Feb 14 01:11:46.991586 containerd[1629]: time="2025-02-14T01:11:46.991440746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:47.026073 kubelet[2938]: I0214 01:11:47.024282 2938 scope.go:117] "RemoveContainer" containerID="ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487" Feb 14 01:11:47.031483 containerd[1629]: time="2025-02-14T01:11:47.028909899Z" level=info msg="RemoveContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\"" Feb 14 01:11:47.040151 containerd[1629]: time="2025-02-14T01:11:47.035967935Z" level=info msg="RemoveContainer for \"ad5c43d505f7edefa676ee95f99bbca9acce9d255a96ade8154f6386dc502487\" returns successfully" Feb 14 01:11:47.070094 kubelet[2938]: I0214 01:11:47.068559 2938 scope.go:117] "RemoveContainer" containerID="4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9" Feb 14 01:11:47.101490 containerd[1629]: time="2025-02-14T01:11:47.101412017Z" level=info msg="RemoveContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\"" Feb 14 01:11:47.148826 containerd[1629]: time="2025-02-14T01:11:47.148289533Z" level=info msg="RemoveContainer for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" returns successfully" Feb 14 01:11:47.152143 containerd[1629]: time="2025-02-14T01:11:47.148443063Z" level=info msg="StopContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" returns successfully" Feb 14 01:11:47.153660 kubelet[2938]: I0214 01:11:47.152675 2938 scope.go:117] "RemoveContainer" containerID="b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e" Feb 14 01:11:47.156467 containerd[1629]: time="2025-02-14T01:11:47.154790500Z" level=info msg="StopPodSandbox for \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\"" Feb 14 01:11:47.156467 containerd[1629]: time="2025-02-14T01:11:47.154856419Z" level=info msg="Container to stop \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:11:47.180753 containerd[1629]: time="2025-02-14T01:11:47.180578586Z" level=info msg="RemoveContainer for \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\"" Feb 14 01:11:47.232190 containerd[1629]: time="2025-02-14T01:11:47.231665937Z" level=info msg="RemoveContainer for \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\" returns successfully" Feb 14 01:11:47.234073 kubelet[2938]: I0214 01:11:47.233570 2938 scope.go:117] "RemoveContainer" containerID="256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f" Feb 14 01:11:47.260794 containerd[1629]: time="2025-02-14T01:11:47.259617300Z" level=info msg="RemoveContainer for \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\"" Feb 14 01:11:47.282639 containerd[1629]: time="2025-02-14T01:11:47.281345972Z" level=info msg="RemoveContainer for \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\" returns successfully" Feb 14 01:11:47.285733 
kubelet[2938]: I0214 01:11:47.284600 2938 scope.go:117] "RemoveContainer" containerID="4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9" Feb 14 01:11:47.285837 containerd[1629]: time="2025-02-14T01:11:47.285161299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6dbzf,Uid:89675f59-426d-4c9c-807d-e3618670e87c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\"" Feb 14 01:11:47.358328 containerd[1629]: time="2025-02-14T01:11:47.307168720Z" level=error msg="ContainerStatus for \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\": not found" Feb 14 01:11:47.364274 containerd[1629]: time="2025-02-14T01:11:47.321653358Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 01:11:47.379860 kubelet[2938]: E0214 01:11:47.379118 2938 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\": not found" containerID="4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9" Feb 14 01:11:47.380767 kubelet[2938]: I0214 01:11:47.380545 2938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9"} err="failed to get container status \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4be6481d72d3775e9236f97a41f2015aa5385ea8a12b8094644e848b900611b9\": not found" Feb 14 01:11:47.380767 kubelet[2938]: I0214 01:11:47.380601 2938 scope.go:117] "RemoveContainer" containerID="b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e" Feb 14 01:11:47.384202 containerd[1629]: time="2025-02-14T01:11:47.383784503Z" level=error msg="ContainerStatus for \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\": not found" Feb 14 01:11:47.384202 containerd[1629]: time="2025-02-14T01:11:47.383934969Z" level=info msg="shim disconnected" id=02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655 namespace=k8s.io Feb 14 01:11:47.384202 containerd[1629]: time="2025-02-14T01:11:47.383983883Z" level=warning msg="cleaning up after shim disconnected" id=02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655 namespace=k8s.io Feb 14 01:11:47.384202 containerd[1629]: time="2025-02-14T01:11:47.383997838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:47.384506 kubelet[2938]: E0214 01:11:47.383974 2938 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\": not found" containerID="b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e" Feb 14 01:11:47.384506 kubelet[2938]: I0214 01:11:47.384010 2938 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e"} err="failed to get container status \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b99b693d56d54a98294474f47bb94f29a249e6cb198e47854a1c9f012dae335e\": not found" Feb 14 01:11:47.384506 kubelet[2938]: I0214 01:11:47.384035 2938 scope.go:117] "RemoveContainer" containerID="256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f" Feb 14 01:11:47.384695 containerd[1629]: time="2025-02-14T01:11:47.384397206Z" level=error msg="ContainerStatus for \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\": not found" Feb 14 01:11:47.384766 kubelet[2938]: E0214 01:11:47.384701 2938 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\": not found" containerID="256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f" Feb 14 01:11:47.384766 kubelet[2938]: I0214 01:11:47.384738 2938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f"} err="failed to get container status \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\": rpc error: code = NotFound desc = an error occurred when try to find container \"256d2afc9582fee17c9b021204054d842d95db0cfc67c048b17973781ac3094f\": not found" Feb 14 01:11:47.409169 containerd[1629]: time="2025-02-14T01:11:47.408540812Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683\"" Feb 14 01:11:47.410142 containerd[1629]: time="2025-02-14T01:11:47.410094702Z" level=info msg="StartContainer for \"8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683\"" Feb 14 01:11:47.471341 containerd[1629]: time="2025-02-14T01:11:47.471282392Z" level=info msg="TearDown network for sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" successfully" Feb 14 01:11:47.474640 containerd[1629]: time="2025-02-14T01:11:47.472778426Z" level=info msg="StopPodSandbox for \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" returns successfully" Feb 14 01:11:47.567284 kubelet[2938]: I0214 01:11:47.566673 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf8hr\" (UniqueName: \"kubernetes.io/projected/1821f6ee-2bb4-4342-b070-23f3e14d282c-kube-api-access-xf8hr\") pod \"1821f6ee-2bb4-4342-b070-23f3e14d282c\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " Feb 14 01:11:47.568788 kubelet[2938]: I0214 01:11:47.568288 2938 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1821f6ee-2bb4-4342-b070-23f3e14d282c-tigera-ca-bundle\") pod \"1821f6ee-2bb4-4342-b070-23f3e14d282c\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " Feb 14 01:11:47.570040 kubelet[2938]: I0214 01:11:47.569255 2938 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1821f6ee-2bb4-4342-b070-23f3e14d282c-typha-certs\") pod \"1821f6ee-2bb4-4342-b070-23f3e14d282c\" (UID: \"1821f6ee-2bb4-4342-b070-23f3e14d282c\") " Feb 14 01:11:47.574081 containerd[1629]: time="2025-02-14T01:11:47.574032951Z" level=info msg="StartContainer for \"8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683\" returns successfully" Feb 14 01:11:47.581601 kubelet[2938]: I0214 01:11:47.579291 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1821f6ee-2bb4-4342-b070-23f3e14d282c-kube-api-access-xf8hr" (OuterVolumeSpecName: "kube-api-access-xf8hr") pod "1821f6ee-2bb4-4342-b070-23f3e14d282c" (UID: "1821f6ee-2bb4-4342-b070-23f3e14d282c"). InnerVolumeSpecName "kube-api-access-xf8hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 01:11:47.583554 kubelet[2938]: I0214 01:11:47.582999 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1821f6ee-2bb4-4342-b070-23f3e14d282c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "1821f6ee-2bb4-4342-b070-23f3e14d282c" (UID: "1821f6ee-2bb4-4342-b070-23f3e14d282c"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 01:11:47.588018 kubelet[2938]: I0214 01:11:47.587634 2938 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1821f6ee-2bb4-4342-b070-23f3e14d282c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1821f6ee-2bb4-4342-b070-23f3e14d282c" (UID: "1821f6ee-2bb4-4342-b070-23f3e14d282c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 01:11:47.648193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5-rootfs.mount: Deactivated successfully. Feb 14 01:11:47.648441 systemd[1]: var-lib-kubelet-pods-1821f6ee\x2d2bb4\x2d4342\x2db070\x2d23f3e14d282c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 14 01:11:47.651681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655-rootfs.mount: Deactivated successfully. Feb 14 01:11:47.651889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655-shm.mount: Deactivated successfully. Feb 14 01:11:47.652048 systemd[1]: var-lib-kubelet-pods-1821f6ee\x2d2bb4\x2d4342\x2db070\x2d23f3e14d282c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxf8hr.mount: Deactivated successfully. Feb 14 01:11:47.652205 systemd[1]: var-lib-kubelet-pods-1821f6ee\x2d2bb4\x2d4342\x2db070\x2d23f3e14d282c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Feb 14 01:11:47.670619 kubelet[2938]: I0214 01:11:47.670563 2938 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1821f6ee-2bb4-4342-b070-23f3e14d282c-tigera-ca-bundle\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:47.670619 kubelet[2938]: I0214 01:11:47.670616 2938 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1821f6ee-2bb4-4342-b070-23f3e14d282c-typha-certs\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:47.670865 kubelet[2938]: I0214 01:11:47.670635 2938 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xf8hr\" (UniqueName: \"kubernetes.io/projected/1821f6ee-2bb4-4342-b070-23f3e14d282c-kube-api-access-xf8hr\") on node \"srv-krhnz.gb1.brightbox.com\" DevicePath \"\"" Feb 14 01:11:47.841949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683-rootfs.mount: Deactivated successfully. Feb 14 01:11:47.843774 containerd[1629]: time="2025-02-14T01:11:47.842622879Z" level=info msg="shim disconnected" id=8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683 namespace=k8s.io Feb 14 01:11:47.843774 containerd[1629]: time="2025-02-14T01:11:47.843525642Z" level=warning msg="cleaning up after shim disconnected" id=8e3223c61c5621be09947982586b97cc90af348a8d315384ddb9f0e1e356d683 namespace=k8s.io Feb 14 01:11:47.845434 containerd[1629]: time="2025-02-14T01:11:47.844220604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:48.078099 kubelet[2938]: I0214 01:11:48.077859 2938 scope.go:117] "RemoveContainer" containerID="d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5" Feb 14 01:11:48.121907 containerd[1629]: time="2025-02-14T01:11:48.121401159Z" level=info msg="RemoveContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\"" Feb 14 01:11:48.132962 containerd[1629]: time="2025-02-14T01:11:48.132903568Z" level=info msg="RemoveContainer for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" returns successfully" Feb 14 01:11:48.136350 kubelet[2938]: I0214 01:11:48.136314 2938 scope.go:117] "RemoveContainer" containerID="d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5" Feb 14 01:11:48.137789 containerd[1629]: time="2025-02-14T01:11:48.137721342Z" level=error msg="ContainerStatus for \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\": not found" Feb 14 01:11:48.140619 kubelet[2938]: E0214 01:11:48.138361 2938 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\": not found" containerID="d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5" Feb 14 01:11:48.140619 kubelet[2938]: I0214 01:11:48.138408 2938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5"} err="failed to get container status \"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d524a772d99f0d05144c5ec5743fc7cc44858c842ab588dc562eead75873c3f5\": not found" Feb 14 01:11:48.143126 containerd[1629]: time="2025-02-14T01:11:48.143000334Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 01:11:48.166768 containerd[1629]: time="2025-02-14T01:11:48.166530949Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141\"" Feb 14 01:11:48.167610 containerd[1629]: time="2025-02-14T01:11:48.167482737Z" level=info msg="StartContainer for \"b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141\"" Feb 14 01:11:48.340834 kubelet[2938]: I0214 01:11:48.340780 2938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1821f6ee-2bb4-4342-b070-23f3e14d282c" path="/var/lib/kubelet/pods/1821f6ee-2bb4-4342-b070-23f3e14d282c/volumes" Feb 14 01:11:48.352569 kubelet[2938]: I0214 01:11:48.352528 2938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b748ae9-d5a4-45cf-b668-849d2794f614" path="/var/lib/kubelet/pods/2b748ae9-d5a4-45cf-b668-849d2794f614/volumes" Feb 14 01:11:48.360300 kubelet[2938]: I0214 01:11:48.360258 2938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" path="/var/lib/kubelet/pods/8293e9cd-d766-48c9-bffd-cc310990d080/volumes" Feb 14 01:11:48.369427 containerd[1629]: time="2025-02-14T01:11:48.369376181Z" level=info msg="StartContainer for \"b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141\" returns successfully" Feb 14 01:11:48.774552 systemd[1]: Started sshd@9-10.230.17.130:22-147.75.109.163:40426.service - OpenSSH per-connection server daemon (147.75.109.163:40426). Feb 14 01:11:49.758183 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 40426 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:11:49.762933 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:11:49.804556 systemd-logind[1617]: New session 12 of user core. Feb 14 01:11:49.808482 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 01:11:50.989727 systemd[1]: Started sshd@10-10.230.17.130:22-164.92.210.70:6103.service - OpenSSH per-connection server daemon (164.92.210.70:6103). Feb 14 01:11:51.156630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141-rootfs.mount: Deactivated successfully. Feb 14 01:11:51.166885 containerd[1629]: time="2025-02-14T01:11:51.166670824Z" level=info msg="shim disconnected" id=b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141 namespace=k8s.io Feb 14 01:11:51.166885 containerd[1629]: time="2025-02-14T01:11:51.166825051Z" level=warning msg="cleaning up after shim disconnected" id=b9080645baafb91f15e8974f93b8ba782448a937d2934db91b97aa350ecf8141 namespace=k8s.io Feb 14 01:11:51.166885 containerd[1629]: time="2025-02-14T01:11:51.166850098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:11:51.186334 sshd[5958]: pam_unix(sshd:session): session closed for user core Feb 14 01:11:51.205785 systemd[1]: sshd@9-10.230.17.130:22-147.75.109.163:40426.service: Deactivated successfully. 
Feb 14 01:11:51.227996 systemd-logind[1617]: Session 12 logged out. Waiting for processes to exit. Feb 14 01:11:51.228427 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 01:11:51.240494 systemd-logind[1617]: Removed session 12. Feb 14 01:11:51.278382 sshd[5973]: kex_protocol_error: type 20 seq 2 [preauth] Feb 14 01:11:51.278382 sshd[5973]: kex_protocol_error: type 30 seq 3 [preauth] Feb 14 01:11:52.281623 containerd[1629]: time="2025-02-14T01:11:52.280770888Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 01:11:52.349909 containerd[1629]: time="2025-02-14T01:11:52.349362059Z" level=info msg="CreateContainer within sandbox \"6edec9de241c6a8280e82907a7009a782b210a4a5ea061d67f1dfbf0ec1cd339\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e4758cd55dab2c918f7a81313101f9442dab9e069538b02201bfc896e235bd4\"" Feb 14 01:11:52.370488 containerd[1629]: time="2025-02-14T01:11:52.370318540Z" level=info msg="StartContainer for \"6e4758cd55dab2c918f7a81313101f9442dab9e069538b02201bfc896e235bd4\"" Feb 14 01:11:52.544016 containerd[1629]: time="2025-02-14T01:11:52.543867561Z" level=info msg="StartContainer for \"6e4758cd55dab2c918f7a81313101f9442dab9e069538b02201bfc896e235bd4\" returns successfully" Feb 14 01:11:52.830411 kubelet[2938]: I0214 01:11:52.830162 2938 topology_manager.go:215] "Topology Admit Handler" podUID="784a584e-d552-4955-94e5-728d7027677c" podNamespace="calico-system" podName="calico-kube-controllers-6848dbb8bb-cm9vt" Feb 14 01:11:52.851046 kubelet[2938]: E0214 01:11:52.850689 2938 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1821f6ee-2bb4-4342-b070-23f3e14d282c" containerName="calico-typha" Feb 14 01:11:52.851046 kubelet[2938]: E0214 01:11:52.850742 2938 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" containerName="calico-kube-controllers" Feb 14 01:11:52.855897 kubelet[2938]: I0214 01:11:52.855715 2938 memory_manager.go:354] "RemoveStaleState removing state" podUID="1821f6ee-2bb4-4342-b070-23f3e14d282c" containerName="calico-typha" Feb 14 01:11:52.855897 kubelet[2938]: I0214 01:11:52.855783 2938 memory_manager.go:354] "RemoveStaleState removing state" podUID="8293e9cd-d766-48c9-bffd-cc310990d080" containerName="calico-kube-controllers" Feb 14 01:11:52.955106 kubelet[2938]: I0214 01:11:52.954990 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/784a584e-d552-4955-94e5-728d7027677c-tigera-ca-bundle\") pod \"calico-kube-controllers-6848dbb8bb-cm9vt\" (UID: \"784a584e-d552-4955-94e5-728d7027677c\") " pod="calico-system/calico-kube-controllers-6848dbb8bb-cm9vt" Feb 14 01:11:52.955794 kubelet[2938]: I0214 01:11:52.955506 2938 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc8jc\" (UniqueName: \"kubernetes.io/projected/784a584e-d552-4955-94e5-728d7027677c-kube-api-access-dc8jc\") pod \"calico-kube-controllers-6848dbb8bb-cm9vt\" (UID: \"784a584e-d552-4955-94e5-728d7027677c\") " pod="calico-system/calico-kube-controllers-6848dbb8bb-cm9vt" Feb 14 01:11:52.970237 sshd[5973]: kex_protocol_error: type 20 seq 4 [preauth] Feb 14 01:11:52.970237 sshd[5973]: kex_protocol_error: type 30 seq 5 [preauth] Feb 14 01:11:53.212228 containerd[1629]: 
time="2025-02-14T01:11:53.212158538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6848dbb8bb-cm9vt,Uid:784a584e-d552-4955-94e5-728d7027677c,Namespace:calico-system,Attempt:0,}" Feb 14 01:11:53.253582 kubelet[2938]: I0214 01:11:53.243630 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6dbzf" podStartSLOduration=7.241724446 podStartE2EDuration="7.241724446s" podCreationTimestamp="2025-02-14 01:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:11:53.238212965 +0000 UTC m=+87.106259490" watchObservedRunningTime="2025-02-14 01:11:53.241724446 +0000 UTC m=+87.109770960" Feb 14 01:11:53.554676 systemd-networkd[1261]: cali72dd5e991d3: Link UP Feb 14 01:11:53.556645 systemd-networkd[1261]: cali72dd5e991d3: Gained carrier Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.387 [INFO][6078] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0 calico-kube-controllers-6848dbb8bb- calico-system 784a584e-d552-4955-94e5-728d7027677c 1133 0 2025-02-14 01:11:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6848dbb8bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-krhnz.gb1.brightbox.com calico-kube-controllers-6848dbb8bb-cm9vt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali72dd5e991d3 [] []}} ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.388 [INFO][6078] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.457 [INFO][6100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" HandleID="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.480 [INFO][6100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" HandleID="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec610), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-krhnz.gb1.brightbox.com", "pod":"calico-kube-controllers-6848dbb8bb-cm9vt", "timestamp":"2025-02-14 01:11:53.45766513 +0000 UTC"}, Hostname:"srv-krhnz.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.480 [INFO][6100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.480 [INFO][6100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.480 [INFO][6100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-krhnz.gb1.brightbox.com' Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.483 [INFO][6100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.494 [INFO][6100] ipam/ipam.go 372: Looking up existing affinities for host host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.506 [INFO][6100] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.514 [INFO][6100] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.520 [INFO][6100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.520 [INFO][6100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.523 [INFO][6100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4 Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.531 [INFO][6100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.544 [INFO][6100] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.199/26] block=192.168.126.192/26 handle="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.544 [INFO][6100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.199/26] handle="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" host="srv-krhnz.gb1.brightbox.com" Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.544 [INFO][6100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 01:11:53.578986 containerd[1629]: 2025-02-14 01:11:53.544 [INFO][6100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.199/26] IPv6=[] ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" HandleID="k8s-pod-network.c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.548 [INFO][6078] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0", GenerateName:"calico-kube-controllers-6848dbb8bb-", Namespace:"calico-system", SelfLink:"", UID:"784a584e-d552-4955-94e5-728d7027677c", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6848dbb8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6848dbb8bb-cm9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72dd5e991d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.549 [INFO][6078] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.199/32] ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.549 [INFO][6078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72dd5e991d3 ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.557 [INFO][6078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 
01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.558 [INFO][6078] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0", GenerateName:"calico-kube-controllers-6848dbb8bb-", Namespace:"calico-system", SelfLink:"", UID:"784a584e-d552-4955-94e5-728d7027677c", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6848dbb8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4", Pod:"calico-kube-controllers-6848dbb8bb-cm9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72dd5e991d3", MAC:"d6:59:24:11:a4:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:11:53.594422 containerd[1629]: 2025-02-14 01:11:53.572 [INFO][6078] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4" Namespace="calico-system" Pod="calico-kube-controllers-6848dbb8bb-cm9vt" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6848dbb8bb--cm9vt-eth0" Feb 14 01:11:53.658028 containerd[1629]: time="2025-02-14T01:11:53.657808252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:11:53.658836 containerd[1629]: time="2025-02-14T01:11:53.658058638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:11:53.658836 containerd[1629]: time="2025-02-14T01:11:53.658181797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:53.658836 containerd[1629]: time="2025-02-14T01:11:53.658553949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:11:53.719044 systemd[1]: run-containerd-runc-k8s.io-c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4-runc.wu1MBU.mount: Deactivated successfully. Feb 14 01:11:53.976797 systemd-resolved[1516]: Under memory pressure, flushing caches. 
Feb 14 01:11:53.979353 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:11:53.976856 systemd-resolved[1516]: Flushed all caches. Feb 14 01:11:54.013030 containerd[1629]: time="2025-02-14T01:11:54.012935189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6848dbb8bb-cm9vt,Uid:784a584e-d552-4955-94e5-728d7027677c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4\"" Feb 14 01:11:54.027513 containerd[1629]: time="2025-02-14T01:11:54.027399075Z" level=info msg="CreateContainer within sandbox \"c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 01:11:54.065527 containerd[1629]: time="2025-02-14T01:11:54.065266363Z" level=info msg="CreateContainer within sandbox \"c886ec863abf91e7510f749eec8ecfccd7b264a1e8bb312268fcb14f1b9cc7c4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"392f04406e5f5cfbb1bc32044930ca324780e3f69a5ef9c23571d5e88fead5cd\"" Feb 14 01:11:54.067056 containerd[1629]: time="2025-02-14T01:11:54.067016875Z" level=info msg="StartContainer for \"392f04406e5f5cfbb1bc32044930ca324780e3f69a5ef9c23571d5e88fead5cd\"" Feb 14 01:11:54.201926 containerd[1629]: time="2025-02-14T01:11:54.201366412Z" level=info msg="StartContainer for \"392f04406e5f5cfbb1bc32044930ca324780e3f69a5ef9c23571d5e88fead5cd\" returns successfully" Feb 14 01:11:54.245007 kubelet[2938]: I0214 01:11:54.240795 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6848dbb8bb-cm9vt" podStartSLOduration=7.240770001 podStartE2EDuration="7.240770001s" podCreationTimestamp="2025-02-14 01:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:11:54.239906358 +0000 UTC m=+88.107952883" watchObservedRunningTime="2025-02-14 01:11:54.240770001 +0000 UTC m=+88.108816521" Feb 14 01:11:54.991896 sshd[5973]: kex_protocol_error: type 20 seq 6 [preauth] Feb 14 01:11:55.001054 sshd[5973]: kex_protocol_error: type 30 seq 7 [preauth] Feb 14 01:11:55.388907 systemd-networkd[1261]: cali72dd5e991d3: Gained IPv6LL Feb 14 01:11:55.393002 systemd[1]: run-containerd-runc-k8s.io-392f04406e5f5cfbb1bc32044930ca324780e3f69a5ef9c23571d5e88fead5cd-runc.8m6VoP.mount: Deactivated successfully. Feb 14 01:11:56.029804 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:11:56.030978 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:11:56.031007 systemd-resolved[1516]: Flushed all caches. Feb 14 01:11:56.350336 systemd[1]: Started sshd@11-10.230.17.130:22-147.75.109.163:40900.service - OpenSSH per-connection server daemon (147.75.109.163:40900). Feb 14 01:11:57.388966 sshd[6353]: Accepted publickey for core from 147.75.109.163 port 40900 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:11:57.392186 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:11:57.402857 systemd-logind[1617]: New session 13 of user core. Feb 14 01:11:57.408972 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 01:11:58.485552 sshd[6353]: pam_unix(sshd:session): session closed for user core Feb 14 01:11:58.493916 systemd[1]: sshd@11-10.230.17.130:22-147.75.109.163:40900.service: Deactivated successfully. 
Feb 14 01:11:58.499819 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 01:11:58.499983 systemd-logind[1617]: Session 13 logged out. Waiting for processes to exit. Feb 14 01:11:58.503248 systemd-logind[1617]: Removed session 13. Feb 14 01:12:03.645176 systemd[1]: Started sshd@12-10.230.17.130:22-147.75.109.163:52010.service - OpenSSH per-connection server daemon (147.75.109.163:52010). Feb 14 01:12:04.571626 sshd[6546]: Accepted publickey for core from 147.75.109.163 port 52010 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:04.574163 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:04.581960 systemd-logind[1617]: New session 14 of user core. Feb 14 01:12:04.590627 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 14 01:12:05.306919 sshd[6546]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:05.316632 systemd[1]: sshd@12-10.230.17.130:22-147.75.109.163:52010.service: Deactivated successfully. Feb 14 01:12:05.322170 systemd[1]: session-14.scope: Deactivated successfully. Feb 14 01:12:05.322718 systemd-logind[1617]: Session 14 logged out. Waiting for processes to exit. Feb 14 01:12:05.330099 systemd-logind[1617]: Removed session 14. Feb 14 01:12:10.460278 systemd[1]: Started sshd@13-10.230.17.130:22-147.75.109.163:44498.service - OpenSSH per-connection server daemon (147.75.109.163:44498). Feb 14 01:12:11.387975 sshd[6776]: Accepted publickey for core from 147.75.109.163 port 44498 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:11.391749 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:11.400597 systemd-logind[1617]: New session 15 of user core. Feb 14 01:12:11.405898 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 14 01:12:12.210849 sshd[6776]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:12.216052 systemd-logind[1617]: Session 15 logged out. Waiting for processes to exit. Feb 14 01:12:12.217939 systemd[1]: sshd@13-10.230.17.130:22-147.75.109.163:44498.service: Deactivated successfully. Feb 14 01:12:12.221986 systemd[1]: session-15.scope: Deactivated successfully. Feb 14 01:12:12.224512 systemd-logind[1617]: Removed session 15. Feb 14 01:12:12.362874 systemd[1]: Started sshd@14-10.230.17.130:22-147.75.109.163:44504.service - OpenSSH per-connection server daemon (147.75.109.163:44504). Feb 14 01:12:13.242034 sshd[6793]: Accepted publickey for core from 147.75.109.163 port 44504 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:13.244430 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:13.252554 systemd-logind[1617]: New session 16 of user core. Feb 14 01:12:13.258183 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 14 01:12:14.138703 sshd[6793]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:14.146006 systemd[1]: sshd@14-10.230.17.130:22-147.75.109.163:44504.service: Deactivated successfully. Feb 14 01:12:14.151842 systemd[1]: session-16.scope: Deactivated successfully. Feb 14 01:12:14.153846 systemd-logind[1617]: Session 16 logged out. Waiting for processes to exit. Feb 14 01:12:14.156038 systemd-logind[1617]: Removed session 16. Feb 14 01:12:14.290911 systemd[1]: Started sshd@15-10.230.17.130:22-147.75.109.163:44510.service - OpenSSH per-connection server daemon (147.75.109.163:44510). 
Feb 14 01:12:15.184290 sshd[6805]: Accepted publickey for core from 147.75.109.163 port 44510 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:15.186560 sshd[6805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:15.194245 systemd-logind[1617]: New session 17 of user core. Feb 14 01:12:15.202959 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 14 01:12:15.965712 sshd[6805]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:15.970664 systemd[1]: sshd@15-10.230.17.130:22-147.75.109.163:44510.service: Deactivated successfully. Feb 14 01:12:15.976914 systemd-logind[1617]: Session 17 logged out. Waiting for processes to exit. Feb 14 01:12:15.977565 systemd[1]: session-17.scope: Deactivated successfully. Feb 14 01:12:15.978982 systemd-logind[1617]: Removed session 17. Feb 14 01:12:21.123883 systemd[1]: Started sshd@16-10.230.17.130:22-147.75.109.163:41424.service - OpenSSH per-connection server daemon (147.75.109.163:41424). Feb 14 01:12:22.059542 sshd[6865]: Accepted publickey for core from 147.75.109.163 port 41424 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:22.065410 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:22.085506 systemd-logind[1617]: New session 18 of user core. Feb 14 01:12:22.092685 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 14 01:12:22.964405 sshd[5973]: Connection reset by 164.92.210.70 port 6103 [preauth] Feb 14 01:12:22.966206 systemd[1]: sshd@10-10.230.17.130:22-164.92.210.70:6103.service: Deactivated successfully. Feb 14 01:12:23.033961 sshd[6865]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:23.046493 systemd[1]: sshd@16-10.230.17.130:22-147.75.109.163:41424.service: Deactivated successfully. Feb 14 01:12:23.055779 systemd-logind[1617]: Session 18 logged out. Waiting for processes to exit. Feb 14 01:12:23.056959 systemd[1]: session-18.scope: Deactivated successfully. Feb 14 01:12:23.059404 systemd-logind[1617]: Removed session 18. 
Feb 14 01:12:28.083950 containerd[1629]: time="2025-02-14T01:12:28.059786289Z" level=info msg="StopPodSandbox for \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\"" Feb 14 01:12:28.083950 containerd[1629]: time="2025-02-14T01:12:28.083937086Z" level=info msg="TearDown network for sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" successfully" Feb 14 01:12:28.083950 containerd[1629]: time="2025-02-14T01:12:28.083964639Z" level=info msg="StopPodSandbox for \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" returns successfully" Feb 14 01:12:28.097414 containerd[1629]: time="2025-02-14T01:12:28.096981969Z" level=info msg="RemovePodSandbox for \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\"" Feb 14 01:12:28.106215 containerd[1629]: time="2025-02-14T01:12:28.106102758Z" level=info msg="Forcibly stopping sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\"" Feb 14 01:12:28.106788 containerd[1629]: time="2025-02-14T01:12:28.106612728Z" level=info msg="TearDown network for sandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" successfully" Feb 14 01:12:28.131716 containerd[1629]: time="2025-02-14T01:12:28.131428611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:12:28.132388 containerd[1629]: time="2025-02-14T01:12:28.132206895Z" level=info msg="RemovePodSandbox \"02218a4daf8e6d014c4effd04bbeb47c198c88c447f2e9c7cb923f365d2db655\" returns successfully" Feb 14 01:12:28.134158 containerd[1629]: time="2025-02-14T01:12:28.134097800Z" level=info msg="StopPodSandbox for \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\"" Feb 14 01:12:28.190925 systemd[1]: Started sshd@17-10.230.17.130:22-147.75.109.163:41434.service - OpenSSH per-connection server daemon (147.75.109.163:41434). Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.420 [WARNING][6916] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.420 [INFO][6916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.420 [INFO][6916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" iface="eth0" netns="" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.421 [INFO][6916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.421 [INFO][6916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.492 [INFO][6923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.497 [INFO][6923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.498 [INFO][6923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.518 [WARNING][6923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.518 [INFO][6923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.522 [INFO][6923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:12:28.527260 containerd[1629]: 2025-02-14 01:12:28.525 [INFO][6916] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.528129 containerd[1629]: time="2025-02-14T01:12:28.527867098Z" level=info msg="TearDown network for sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" successfully" Feb 14 01:12:28.528129 containerd[1629]: time="2025-02-14T01:12:28.527908635Z" level=info msg="StopPodSandbox for \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" returns successfully" Feb 14 01:12:28.528861 containerd[1629]: time="2025-02-14T01:12:28.528799197Z" level=info msg="RemovePodSandbox for \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\"" Feb 14 01:12:28.529016 containerd[1629]: time="2025-02-14T01:12:28.528991033Z" level=info msg="Forcibly stopping sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\"" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.610 [WARNING][6942] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" WorkloadEndpoint="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.611 [INFO][6942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.611 [INFO][6942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" iface="eth0" netns="" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.611 [INFO][6942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.611 [INFO][6942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.647 [INFO][6948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.648 [INFO][6948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.648 [INFO][6948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.660 [WARNING][6948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.660 [INFO][6948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" HandleID="k8s-pod-network.95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--kube--controllers--6496b8966d--qmcxb-eth0" Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.664 [INFO][6948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:12:28.669555 containerd[1629]: 2025-02-14 01:12:28.667 [INFO][6942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855" Feb 14 01:12:28.673205 containerd[1629]: time="2025-02-14T01:12:28.669896281Z" level=info msg="TearDown network for sandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" successfully" Feb 14 01:12:28.676153 containerd[1629]: time="2025-02-14T01:12:28.676105463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:12:28.676387 containerd[1629]: time="2025-02-14T01:12:28.676358867Z" level=info msg="RemovePodSandbox \"95cfae88cafffc25bc8e2a7ad48d38a8384f73cf1b70d534b11686425d239855\" returns successfully" Feb 14 01:12:28.677707 containerd[1629]: time="2025-02-14T01:12:28.677666339Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.737 [WARNING][6966] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bc0044f-26fb-4412-b84f-e458618cf4b4", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7", Pod:"calico-apiserver-7654bfc7bc-jvfgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a647d7a030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.737 [INFO][6966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.737 [INFO][6966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" iface="eth0" netns="" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.737 [INFO][6966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.737 [INFO][6966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.766 [INFO][6972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.766 [INFO][6972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.766 [INFO][6972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.782 [WARNING][6972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.782 [INFO][6972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.788 [INFO][6972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:12:28.792661 containerd[1629]: 2025-02-14 01:12:28.790 [INFO][6966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.792661 containerd[1629]: time="2025-02-14T01:12:28.792612514Z" level=info msg="TearDown network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" successfully" Feb 14 01:12:28.792661 containerd[1629]: time="2025-02-14T01:12:28.792658356Z" level=info msg="StopPodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" returns successfully" Feb 14 01:12:28.796131 containerd[1629]: time="2025-02-14T01:12:28.795606483Z" level=info msg="RemovePodSandbox for \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:12:28.796131 containerd[1629]: time="2025-02-14T01:12:28.795701843Z" level=info msg="Forcibly stopping sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\"" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.872 [WARNING][6990] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0", GenerateName:"calico-apiserver-7654bfc7bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bc0044f-26fb-4412-b84f-e458618cf4b4", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7654bfc7bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-krhnz.gb1.brightbox.com", ContainerID:"674540eede0730e7f999dec72fdbe7b377b6d96b72b0a82d7b971962e095cbd7", Pod:"calico-apiserver-7654bfc7bc-jvfgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a647d7a030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.872 [INFO][6990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.872 [INFO][6990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" iface="eth0" netns="" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.872 [INFO][6990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.872 [INFO][6990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.905 [INFO][6996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.905 [INFO][6996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.905 [INFO][6996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.917 [WARNING][6996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.917 [INFO][6996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" HandleID="k8s-pod-network.1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Workload="srv--krhnz.gb1.brightbox.com-k8s-calico--apiserver--7654bfc7bc--jvfgq-eth0" Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.922 [INFO][6996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:12:28.925919 containerd[1629]: 2025-02-14 01:12:28.924 [INFO][6990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee" Feb 14 01:12:28.927298 containerd[1629]: time="2025-02-14T01:12:28.925974451Z" level=info msg="TearDown network for sandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" successfully" Feb 14 01:12:28.929595 containerd[1629]: time="2025-02-14T01:12:28.929557106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:12:28.929769 containerd[1629]: time="2025-02-14T01:12:28.929644975Z" level=info msg="RemovePodSandbox \"1d480dd32e5d912f5cf56c0867a8565a86a76966c3775e0d5c20c99feed849ee\" returns successfully" Feb 14 01:12:28.930997 containerd[1629]: time="2025-02-14T01:12:28.930838581Z" level=info msg="StopPodSandbox for \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\"" Feb 14 01:12:28.931386 containerd[1629]: time="2025-02-14T01:12:28.931181744Z" level=info msg="TearDown network for sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" successfully" Feb 14 01:12:28.931386 containerd[1629]: time="2025-02-14T01:12:28.931244272Z" level=info msg="StopPodSandbox for \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" returns successfully" Feb 14 01:12:28.932911 containerd[1629]: time="2025-02-14T01:12:28.932791314Z" level=info msg="RemovePodSandbox for \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\"" Feb 14 01:12:28.932911 containerd[1629]: time="2025-02-14T01:12:28.932878538Z" level=info msg="Forcibly stopping sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\"" Feb 14 01:12:28.933432 containerd[1629]: time="2025-02-14T01:12:28.933168751Z" level=info msg="TearDown network for sandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" successfully" Feb 14 01:12:28.939182 containerd[1629]: time="2025-02-14T01:12:28.939078324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:12:28.939182 containerd[1629]: time="2025-02-14T01:12:28.939138269Z" level=info msg="RemovePodSandbox \"44853c547a8e31ba6b72d1a5c7335f1a3069492d8925f807623dc33cdfd98f72\" returns successfully" Feb 14 01:12:29.183723 sshd[6912]: Accepted publickey for core from 147.75.109.163 port 41434 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:29.196692 sshd[6912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:29.205917 systemd-logind[1617]: New session 19 of user core. Feb 14 01:12:29.216046 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 14 01:12:30.015077 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:12:30.008635 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:12:30.008651 systemd-resolved[1516]: Flushed all caches. Feb 14 01:12:30.381422 sshd[6912]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:30.387616 systemd[1]: sshd@17-10.230.17.130:22-147.75.109.163:41434.service: Deactivated successfully. Feb 14 01:12:30.392359 systemd-logind[1617]: Session 19 logged out. Waiting for processes to exit. Feb 14 01:12:30.393089 systemd[1]: session-19.scope: Deactivated successfully. Feb 14 01:12:30.397670 systemd-logind[1617]: Removed session 19. Feb 14 01:12:32.062480 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:12:32.057176 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:12:32.057199 systemd-resolved[1516]: Flushed all caches. Feb 14 01:12:35.536832 systemd[1]: Started sshd@18-10.230.17.130:22-147.75.109.163:55634.service - OpenSSH per-connection server daemon (147.75.109.163:55634). Feb 14 01:12:36.427643 sshd[7022]: Accepted publickey for core from 147.75.109.163 port 55634 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:36.430024 sshd[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:36.439751 systemd-logind[1617]: New session 20 of user core. Feb 14 01:12:36.448953 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 14 01:12:37.202843 sshd[7022]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:37.206516 systemd[1]: sshd@18-10.230.17.130:22-147.75.109.163:55634.service: Deactivated successfully. Feb 14 01:12:37.212226 systemd-logind[1617]: Session 20 logged out. Waiting for processes to exit. Feb 14 01:12:37.212863 systemd[1]: session-20.scope: Deactivated successfully. Feb 14 01:12:37.215440 systemd-logind[1617]: Removed session 20. Feb 14 01:12:42.355416 systemd[1]: Started sshd@19-10.230.17.130:22-147.75.109.163:49150.service - OpenSSH per-connection server daemon (147.75.109.163:49150). Feb 14 01:12:43.262397 sshd[7038]: Accepted publickey for core from 147.75.109.163 port 49150 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:43.264968 sshd[7038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:43.273061 systemd-logind[1617]: New session 21 of user core. Feb 14 01:12:43.277042 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 14 01:12:44.183758 sshd[7038]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:44.189067 systemd[1]: sshd@19-10.230.17.130:22-147.75.109.163:49150.service: Deactivated successfully. Feb 14 01:12:44.192726 systemd[1]: session-21.scope: Deactivated successfully. Feb 14 01:12:44.194232 systemd-logind[1617]: Session 21 logged out. 
Waiting for processes to exit. Feb 14 01:12:44.196752 systemd-logind[1617]: Removed session 21. Feb 14 01:12:44.337039 systemd[1]: Started sshd@20-10.230.17.130:22-147.75.109.163:49162.service - OpenSSH per-connection server daemon (147.75.109.163:49162). Feb 14 01:12:45.253541 sshd[7052]: Accepted publickey for core from 147.75.109.163 port 49162 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:45.255729 sshd[7052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:45.264817 systemd-logind[1617]: New session 22 of user core. Feb 14 01:12:45.268922 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 14 01:12:46.278643 sshd[7052]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:46.294495 systemd[1]: sshd@20-10.230.17.130:22-147.75.109.163:49162.service: Deactivated successfully. Feb 14 01:12:46.307163 systemd-logind[1617]: Session 22 logged out. Waiting for processes to exit. Feb 14 01:12:46.308253 systemd[1]: session-22.scope: Deactivated successfully. Feb 14 01:12:46.312944 systemd-logind[1617]: Removed session 22. Feb 14 01:12:46.427043 systemd[1]: Started sshd@21-10.230.17.130:22-147.75.109.163:49168.service - OpenSSH per-connection server daemon (147.75.109.163:49168). Feb 14 01:12:47.342209 sshd[7064]: Accepted publickey for core from 147.75.109.163 port 49168 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:47.346803 sshd[7064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:47.365427 systemd-logind[1617]: New session 23 of user core. Feb 14 01:12:47.373026 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 14 01:12:51.295158 sshd[7064]: pam_unix(sshd:session): session closed for user core Feb 14 01:12:51.312551 systemd[1]: sshd@21-10.230.17.130:22-147.75.109.163:49168.service: Deactivated successfully. Feb 14 01:12:51.314482 systemd-logind[1617]: Session 23 logged out. Waiting for processes to exit. Feb 14 01:12:51.318381 systemd[1]: session-23.scope: Deactivated successfully. Feb 14 01:12:51.323766 systemd-logind[1617]: Removed session 23. Feb 14 01:12:51.437959 systemd[1]: Started sshd@22-10.230.17.130:22-147.75.109.163:57106.service - OpenSSH per-connection server daemon (147.75.109.163:57106). Feb 14 01:12:52.029859 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:12:52.024645 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:12:52.024659 systemd-resolved[1516]: Flushed all caches. Feb 14 01:12:52.348133 sshd[7114]: Accepted publickey for core from 147.75.109.163 port 57106 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:12:52.350602 sshd[7114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:12:52.358895 systemd-logind[1617]: New session 24 of user core. Feb 14 01:12:52.373161 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 14 01:12:53.460740 systemd[1]: run-containerd-runc-k8s.io-392f04406e5f5cfbb1bc32044930ca324780e3f69a5ef9c23571d5e88fead5cd-runc.97hYvh.mount: Deactivated successfully. Feb 14 01:12:54.077247 systemd-journald[1165]: Under memory pressure, flushing caches. Feb 14 01:12:54.076000 systemd-resolved[1516]: Under memory pressure, flushing caches. Feb 14 01:12:54.076024 systemd-resolved[1516]: Flushed all caches. 
Feb 14 01:12:54.231118 sshd[7114]: pam_unix(sshd:session): session closed for user core
Feb 14 01:12:54.242747 systemd[1]: sshd@22-10.230.17.130:22-147.75.109.163:57106.service: Deactivated successfully.
Feb 14 01:12:54.250013 systemd[1]: session-24.scope: Deactivated successfully.
Feb 14 01:12:54.250113 systemd-logind[1617]: Session 24 logged out. Waiting for processes to exit.
Feb 14 01:12:54.260104 systemd-logind[1617]: Removed session 24.
Feb 14 01:12:54.392138 systemd[1]: Started sshd@23-10.230.17.130:22-147.75.109.163:57118.service - OpenSSH per-connection server daemon (147.75.109.163:57118).
Feb 14 01:12:55.386124 sshd[7166]: Accepted publickey for core from 147.75.109.163 port 57118 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:12:55.387879 sshd[7166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:12:55.399523 systemd-logind[1617]: New session 25 of user core.
Feb 14 01:12:55.403905 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 14 01:12:56.122884 sshd[7166]: pam_unix(sshd:session): session closed for user core
Feb 14 01:12:56.128032 systemd[1]: sshd@23-10.230.17.130:22-147.75.109.163:57118.service: Deactivated successfully.
Feb 14 01:12:56.132689 systemd[1]: session-25.scope: Deactivated successfully.
Feb 14 01:12:56.134145 systemd-logind[1617]: Session 25 logged out. Waiting for processes to exit.
Feb 14 01:12:56.136093 systemd-logind[1617]: Removed session 25.
Feb 14 01:13:01.274872 systemd[1]: Started sshd@24-10.230.17.130:22-147.75.109.163:38296.service - OpenSSH per-connection server daemon (147.75.109.163:38296).
Feb 14 01:13:02.183323 sshd[7181]: Accepted publickey for core from 147.75.109.163 port 38296 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:13:02.186059 sshd[7181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:13:02.195273 systemd-logind[1617]: New session 26 of user core.
Feb 14 01:13:02.204079 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 14 01:13:02.911408 sshd[7181]: pam_unix(sshd:session): session closed for user core
Feb 14 01:13:02.917665 systemd[1]: sshd@24-10.230.17.130:22-147.75.109.163:38296.service: Deactivated successfully.
Feb 14 01:13:02.917725 systemd-logind[1617]: Session 26 logged out. Waiting for processes to exit.
Feb 14 01:13:02.925630 systemd[1]: session-26.scope: Deactivated successfully.
Feb 14 01:13:02.927246 systemd-logind[1617]: Removed session 26.
Feb 14 01:13:08.062826 systemd[1]: Started sshd@25-10.230.17.130:22-147.75.109.163:38310.service - OpenSSH per-connection server daemon (147.75.109.163:38310).
Feb 14 01:13:08.978340 sshd[7198]: Accepted publickey for core from 147.75.109.163 port 38310 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:13:08.981163 sshd[7198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:13:08.991252 systemd-logind[1617]: New session 27 of user core.
Feb 14 01:13:08.996999 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 14 01:13:09.715383 sshd[7198]: pam_unix(sshd:session): session closed for user core
Feb 14 01:13:09.720875 systemd-logind[1617]: Session 27 logged out. Waiting for processes to exit.
Feb 14 01:13:09.721376 systemd[1]: sshd@25-10.230.17.130:22-147.75.109.163:38310.service: Deactivated successfully.
Feb 14 01:13:09.728474 systemd[1]: session-27.scope: Deactivated successfully.
Feb 14 01:13:09.731580 systemd-logind[1617]: Removed session 27.
Feb 14 01:13:14.864933 systemd[1]: Started sshd@26-10.230.17.130:22-147.75.109.163:52222.service - OpenSSH per-connection server daemon (147.75.109.163:52222).
Feb 14 01:13:15.770880 sshd[7215]: Accepted publickey for core from 147.75.109.163 port 52222 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:13:15.775192 sshd[7215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:13:15.786199 systemd-logind[1617]: New session 28 of user core.
Feb 14 01:13:15.793819 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 14 01:13:16.542390 sshd[7215]: pam_unix(sshd:session): session closed for user core
Feb 14 01:13:16.555992 systemd[1]: sshd@26-10.230.17.130:22-147.75.109.163:52222.service: Deactivated successfully.
Feb 14 01:13:16.562009 systemd-logind[1617]: Session 28 logged out. Waiting for processes to exit.
Feb 14 01:13:16.563338 systemd[1]: session-28.scope: Deactivated successfully.
Feb 14 01:13:16.566980 systemd-logind[1617]: Removed session 28.