Feb 13 23:44:56.045387 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 23:44:56.045423 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 23:44:56.045438 kernel: BIOS-provided physical RAM map:
Feb 13 23:44:56.045454 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 23:44:56.045478 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 23:44:56.045490 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 23:44:56.045502 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 23:44:56.045513 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 23:44:56.045523 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 23:44:56.045534 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 23:44:56.045545 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 23:44:56.045555 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 23:44:56.045579 kernel: NX (Execute Disable) protection: active
Feb 13 23:44:56.045592 kernel: APIC: Static calls initialized
Feb 13 23:44:56.045605 kernel: SMBIOS 2.8 present.
Feb 13 23:44:56.045622 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 23:44:56.045635 kernel: Hypervisor detected: KVM
Feb 13 23:44:56.045652 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 23:44:56.045664 kernel: kvm-clock: using sched offset of 5066191849 cycles
Feb 13 23:44:56.045677 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 23:44:56.045689 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 23:44:56.045701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 23:44:56.045713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 23:44:56.045724 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 23:44:56.045736 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 23:44:56.045770 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 23:44:56.045790 kernel: Using GB pages for direct mapping
Feb 13 23:44:56.045802 kernel: ACPI: Early table checksum verification disabled
Feb 13 23:44:56.045814 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 23:44:56.045825 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045837 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045849 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045861 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 23:44:56.045873 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045884 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045900 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045912 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:44:56.045924 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 23:44:56.045936 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 23:44:56.045948 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 23:44:56.045979 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 23:44:56.045993 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 23:44:56.046022 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 23:44:56.046034 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 23:44:56.046046 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 23:44:56.046064 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 23:44:56.046078 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 23:44:56.046090 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 23:44:56.046102 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 23:44:56.046120 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 23:44:56.046132 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 23:44:56.046145 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 23:44:56.046157 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 23:44:56.046169 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 23:44:56.046181 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 23:44:56.046210 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 23:44:56.046223 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 23:44:56.046236 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 23:44:56.046253 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 23:44:56.046273 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 23:44:56.046285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 23:44:56.046298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 23:44:56.046310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 23:44:56.046323 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 23:44:56.046350 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 23:44:56.046364 kernel: Zone ranges:
Feb 13 23:44:56.046376 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 23:44:56.046389 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 23:44:56.046407 kernel: Normal empty
Feb 13 23:44:56.046420 kernel: Movable zone start for each node
Feb 13 23:44:56.046433 kernel: Early memory node ranges
Feb 13 23:44:56.046445 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 23:44:56.046457 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 23:44:56.046481 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 23:44:56.046493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 23:44:56.046506 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 23:44:56.046524 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 23:44:56.046538 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 23:44:56.046556 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 23:44:56.046569 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 23:44:56.046581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 23:44:56.046594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 23:44:56.046606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 23:44:56.046619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 23:44:56.046631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 23:44:56.046643 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 23:44:56.046656 kernel: TSC deadline timer available
Feb 13 23:44:56.046673 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 23:44:56.046685 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 23:44:56.046698 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 23:44:56.046710 kernel: Booting paravirtualized kernel on KVM
Feb 13 23:44:56.046722 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 23:44:56.046735 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 23:44:56.046747 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 23:44:56.046759 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 23:44:56.046771 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 23:44:56.046788 kernel: kvm-guest: PV spinlocks enabled
Feb 13 23:44:56.046801 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 23:44:56.046814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 23:44:56.046827 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 23:44:56.046840 kernel: random: crng init done
Feb 13 23:44:56.046852 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 23:44:56.046864 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 23:44:56.046876 kernel: Fallback order for Node 0: 0
Feb 13 23:44:56.046893 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 23:44:56.046911 kernel: Policy zone: DMA32
Feb 13 23:44:56.046925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 23:44:56.046937 kernel: software IO TLB: area num 16.
Feb 13 23:44:56.046950 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194820K reserved, 0K cma-reserved)
Feb 13 23:44:56.046963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 23:44:56.046975 kernel: Kernel/User page tables isolation: enabled
Feb 13 23:44:56.046988 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 23:44:56.047005 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 23:44:56.047018 kernel: Dynamic Preempt: voluntary
Feb 13 23:44:56.047030 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 23:44:56.047043 kernel: rcu: RCU event tracing is enabled.
Feb 13 23:44:56.047056 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 23:44:56.047068 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 23:44:56.047093 kernel: Rude variant of Tasks RCU enabled.
Feb 13 23:44:56.047110 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 23:44:56.047123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 23:44:56.047136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 23:44:56.047149 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 23:44:56.047162 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 23:44:56.047179 kernel: Console: colour VGA+ 80x25
Feb 13 23:44:56.049221 kernel: printk: console [tty0] enabled
Feb 13 23:44:56.049243 kernel: printk: console [ttyS0] enabled
Feb 13 23:44:56.049257 kernel: ACPI: Core revision 20230628
Feb 13 23:44:56.049270 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 23:44:56.049283 kernel: x2apic enabled
Feb 13 23:44:56.049304 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 23:44:56.049324 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 23:44:56.049339 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 23:44:56.049352 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 23:44:56.049365 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 23:44:56.049378 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 23:44:56.049391 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 23:44:56.049404 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 23:44:56.049417 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 23:44:56.049435 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 23:44:56.049449 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 23:44:56.049471 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 23:44:56.049487 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 23:44:56.049499 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 23:44:56.049512 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 23:44:56.049525 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 23:44:56.049538 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 23:44:56.049551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 23:44:56.049564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 23:44:56.049577 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 23:44:56.049595 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 23:44:56.049609 kernel: Freeing SMP alternatives memory: 32K
Feb 13 23:44:56.049627 kernel: pid_max: default: 32768 minimum: 301
Feb 13 23:44:56.049641 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 23:44:56.049654 kernel: landlock: Up and running.
Feb 13 23:44:56.049667 kernel: SELinux: Initializing.
Feb 13 23:44:56.049680 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 23:44:56.049693 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 23:44:56.049706 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 23:44:56.049719 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:44:56.049732 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:44:56.049751 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:44:56.049764 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 23:44:56.049777 kernel: signal: max sigframe size: 1776
Feb 13 23:44:56.049790 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 23:44:56.049804 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 23:44:56.049817 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 23:44:56.049830 kernel: smp: Bringing up secondary CPUs ...
Feb 13 23:44:56.049843 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 23:44:56.049856 kernel: .... node #0, CPUs: #1
Feb 13 23:44:56.049883 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 23:44:56.049896 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 23:44:56.049909 kernel: smpboot: Max logical packages: 16
Feb 13 23:44:56.049922 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 23:44:56.049935 kernel: devtmpfs: initialized
Feb 13 23:44:56.049948 kernel: x86/mm: Memory block size: 128MB
Feb 13 23:44:56.049961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 23:44:56.049974 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 23:44:56.049987 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 23:44:56.050004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 23:44:56.050024 kernel: audit: initializing netlink subsys (disabled)
Feb 13 23:44:56.050037 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 23:44:56.050050 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 23:44:56.050063 kernel: audit: type=2000 audit(1739490294.644:1): state=initialized audit_enabled=0 res=1
Feb 13 23:44:56.050076 kernel: cpuidle: using governor menu
Feb 13 23:44:56.050091 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 23:44:56.050104 kernel: dca service started, version 1.12.1
Feb 13 23:44:56.050117 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 23:44:56.050146 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 23:44:56.050159 kernel: PCI: Using configuration type 1 for base access
Feb 13 23:44:56.050172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 23:44:56.050197 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 23:44:56.050210 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 23:44:56.050247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 23:44:56.050261 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 23:44:56.050274 kernel: ACPI: Added _OSI(Module Device)
Feb 13 23:44:56.050287 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 23:44:56.050306 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 23:44:56.050319 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 23:44:56.050332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 23:44:56.050345 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 23:44:56.050358 kernel: ACPI: Interpreter enabled
Feb 13 23:44:56.050371 kernel: ACPI: PM: (supports S0 S5)
Feb 13 23:44:56.050384 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 23:44:56.050397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 23:44:56.050410 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 23:44:56.050428 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 23:44:56.050441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 23:44:56.050742 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 23:44:56.050939 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 23:44:56.051128 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 23:44:56.051148 kernel: PCI host bridge to bus 0000:00
Feb 13 23:44:56.051949 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 23:44:56.052134 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 23:44:56.052337 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 23:44:56.052535 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 23:44:56.052704 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 23:44:56.052873 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 23:44:56.053041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 23:44:56.053266 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 23:44:56.053488 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 23:44:56.053675 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 23:44:56.056287 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 23:44:56.056509 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 23:44:56.056697 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 23:44:56.056912 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.057116 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 23:44:56.057385 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.057583 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 23:44:56.057784 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.057974 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 23:44:56.060256 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.060488 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 23:44:56.060690 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.060875 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 23:44:56.061067 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.061278 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 23:44:56.061485 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.061678 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 23:44:56.061868 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 23:44:56.062047 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 23:44:56.064297 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 23:44:56.064522 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 23:44:56.064711 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 23:44:56.064896 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 23:44:56.065095 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 23:44:56.065345 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 23:44:56.065545 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 23:44:56.065724 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 23:44:56.065903 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 23:44:56.066089 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 23:44:56.067282 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 23:44:56.067518 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 23:44:56.067700 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 23:44:56.067878 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 23:44:56.068100 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 23:44:56.068299 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 23:44:56.068514 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 23:44:56.068713 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 23:44:56.068904 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 23:44:56.069090 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 23:44:56.071350 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:44:56.071574 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 23:44:56.071784 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 23:44:56.071993 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 23:44:56.072213 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 23:44:56.072408 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 23:44:56.072625 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 23:44:56.072816 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 23:44:56.073017 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 23:44:56.075265 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 23:44:56.075482 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:44:56.075683 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 23:44:56.075869 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 23:44:56.076050 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 23:44:56.078280 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 23:44:56.078503 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:44:56.078750 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 23:44:56.078932 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 23:44:56.079121 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:44:56.079357 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 23:44:56.079551 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 23:44:56.079748 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:44:56.079937 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 23:44:56.080115 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 23:44:56.082353 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:44:56.082556 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 23:44:56.082747 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 23:44:56.082924 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:44:56.083107 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 23:44:56.083317 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 23:44:56.083510 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:44:56.083531 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 23:44:56.083545 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 23:44:56.083559 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 23:44:56.083580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 23:44:56.083594 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 23:44:56.083607 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 23:44:56.083620 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 23:44:56.083634 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 23:44:56.083647 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 23:44:56.083660 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 23:44:56.083673 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 23:44:56.083686 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 23:44:56.083704 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 23:44:56.083717 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 23:44:56.083730 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 23:44:56.083743 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 23:44:56.083757 kernel: iommu: Default domain type: Translated
Feb 13 23:44:56.083770 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 23:44:56.083783 kernel: PCI: Using ACPI for IRQ routing
Feb 13 23:44:56.083796 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 23:44:56.083809 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 23:44:56.083827 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 23:44:56.084005 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 23:44:56.084181 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 23:44:56.086417 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 23:44:56.086440 kernel: vgaarb: loaded
Feb 13 23:44:56.086454 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 23:44:56.086478 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 23:44:56.086493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 23:44:56.086506 kernel: pnp: PnP ACPI init
Feb 13 23:44:56.086736 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 23:44:56.086759 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 23:44:56.086773 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 23:44:56.086787 kernel: NET: Registered PF_INET protocol family
Feb 13 23:44:56.086800 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 23:44:56.086813 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 23:44:56.086826 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 23:44:56.086840 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 23:44:56.086861 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 23:44:56.086875 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 23:44:56.086888 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 23:44:56.086901 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 23:44:56.086915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 23:44:56.086928 kernel: NET: Registered PF_XDP protocol family
Feb 13 23:44:56.087109 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 23:44:56.087346 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 23:44:56.087552 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 23:44:56.087732 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 23:44:56.087913 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 23:44:56.088091 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 23:44:56.089332 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 23:44:56.089530 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 23:44:56.089722 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 23:44:56.089903 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 23:44:56.090081 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 23:44:56.090289 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 23:44:56.090479 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 23:44:56.090661 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 23:44:56.090838 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 23:44:56.091030 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 23:44:56.093292 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 23:44:56.093511 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 23:44:56.093693 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 23:44:56.093872 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 23:44:56.094049 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 23:44:56.095274 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:44:56.095470 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 23:44:56.095653 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 23:44:56.095842 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 23:44:56.096022 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:44:56.098245 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 23:44:56.098478 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 23:44:56.098660 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 23:44:56.098848 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:44:56.099036 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 23:44:56.099257 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 23:44:56.099498 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 23:44:56.099679 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:44:56.099858 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 23:44:56.100035 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 23:44:56.102254 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 23:44:56.102445 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:44:56.102639 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 23:44:56.102830 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 23:44:56.103012 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 23:44:56.103217 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:44:56.103404 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 23:44:56.103603 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 23:44:56.103796 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 23:44:56.103978 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:44:56.104160 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 23:44:56.106510 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 23:44:56.106706 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 23:44:56.106907 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:44:56.107081 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 23:44:56.107286 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 23:44:56.107454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 23:44:56.107647 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 23:44:56.107813 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 23:44:56.107993 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 23:44:56.108182 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 23:44:56.109409 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 23:44:56.109595 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:44:56.109778 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 23:44:56.109969 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 23:44:56.110140 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 23:44:56.111365 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:44:56.111569 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 23:44:56.111740 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 23:44:56.111910 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:44:56.112099 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 23:44:56.112320 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 23:44:56.112508 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:44:56.112701 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 23:44:56.112873 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 23:44:56.113040 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:44:56.113238 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 23:44:56.113419 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 23:44:56.113606 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:44:56.113789 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 23:44:56.113962 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 23:44:56.114142 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:44:56.114344 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 23:44:56.114533 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 23:44:56.114717 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:44:56.114739 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 23:44:56.114754 kernel: PCI: CLS 0 bytes, default 64
Feb 13 23:44:56.114768 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 23:44:56.114783 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 13 23:44:56.114797 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 23:44:56.114811 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 23:44:56.114824 kernel: Initialise system trusted keyrings
Feb 13 23:44:56.114845 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 23:44:56.114860 kernel: Key type asymmetric registered
Feb 13 23:44:56.114873 kernel: Asymmetric key parser 'x509' registered
Feb 13 23:44:56.114887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 23:44:56.114901 kernel: io scheduler mq-deadline registered
Feb 13 23:44:56.114915 kernel: io scheduler kyber registered
Feb 13 23:44:56.114928 kernel: io scheduler bfq registered
Feb 13 23:44:56.115109 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 13 23:44:56.115326 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 13 23:44:56.115529 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.115711 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 13 23:44:56.115890 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Feb 13 23:44:56.116069 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.116296 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 13 23:44:56.116508 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 13 23:44:56.116703 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.116887 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 13 23:44:56.117067 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 13 23:44:56.117277 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.117468 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 13 23:44:56.117650 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 13 23:44:56.117838 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.118017 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 13 23:44:56.118220 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 13 23:44:56.118405 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.118605 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 13 23:44:56.118787 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 13 23:44:56.118980 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.119163 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 13 23:44:56.119405 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 13 23:44:56.119599 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 23:44:56.119621 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 23:44:56.119637 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 23:44:56.119660 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 23:44:56.119674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 23:44:56.119689 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 23:44:56.119703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 23:44:56.119716 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 23:44:56.119730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 23:44:56.119744 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 23:44:56.119956 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 23:44:56.120127 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 23:44:56.120338 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T23:44:55 UTC (1739490295)
Feb 13 23:44:56.120522 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 23:44:56.120544 kernel: intel_pstate: CPU model not supported
Feb 13 23:44:56.120558 kernel: NET: Registered PF_INET6 protocol family
Feb 13 23:44:56.120572 kernel: Segment Routing with IPv6
Feb 13 23:44:56.120586 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 23:44:56.120608 kernel: NET: Registered PF_PACKET protocol family
Feb 13 23:44:56.120622 kernel: Key type dns_resolver registered
Feb 13 23:44:56.120641 kernel: IPI shorthand broadcast: enabled
Feb 13 23:44:56.120655 kernel: sched_clock: Marking stable (1658005270, 233466065)->(2029117046, -137645711)
Feb 13 23:44:56.120669 kernel: registered taskstats version 1
Feb 13 23:44:56.120683 kernel: Loading compiled-in X.509 certificates
Feb 13 23:44:56.120697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 23:44:56.120710 kernel: Key type .fscrypt registered
Feb 13 23:44:56.120725 kernel: Key type fscrypt-provisioning registered
Feb 13 23:44:56.120738 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 23:44:56.120752 kernel: ima: Allocated hash algorithm: sha1
Feb 13 23:44:56.120771 kernel: ima: No architecture policies found
Feb 13 23:44:56.120785 kernel: clk: Disabling unused clocks
Feb 13 23:44:56.120798 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 23:44:56.120812 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 23:44:56.120826 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 23:44:56.120840 kernel: Run /init as init process
Feb 13 23:44:56.120854 kernel: with arguments:
Feb 13 23:44:56.120867 kernel: /init
Feb 13 23:44:56.120881 kernel: with environment:
Feb 13 23:44:56.120899 kernel: HOME=/
Feb 13 23:44:56.120913 kernel: TERM=linux
Feb 13 23:44:56.120927 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 23:44:56.120944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 23:44:56.120961 systemd[1]: Detected virtualization kvm.
Feb 13 23:44:56.120976 systemd[1]: Detected architecture x86-64.
Feb 13 23:44:56.120990 systemd[1]: Running in initrd.
Feb 13 23:44:56.121004 systemd[1]: No hostname configured, using default hostname.
Feb 13 23:44:56.121024 systemd[1]: Hostname set to .
Feb 13 23:44:56.121039 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 23:44:56.121054 systemd[1]: Queued start job for default target initrd.target.
Feb 13 23:44:56.121069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 23:44:56.121084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 23:44:56.121099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 23:44:56.121114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 23:44:56.121129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 23:44:56.121149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 23:44:56.121166 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 23:44:56.121182 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 23:44:56.121241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 23:44:56.121258 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 23:44:56.121273 systemd[1]: Reached target paths.target - Path Units.
Feb 13 23:44:56.121294 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 23:44:56.121309 systemd[1]: Reached target swap.target - Swaps.
Feb 13 23:44:56.121324 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 23:44:56.121339 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 23:44:56.121354 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 23:44:56.121369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 23:44:56.121383 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 23:44:56.121398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 23:44:56.121413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 23:44:56.121433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 23:44:56.121448 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 23:44:56.121475 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 23:44:56.121490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 23:44:56.121505 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 23:44:56.121520 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 23:44:56.121535 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 23:44:56.121550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 23:44:56.121565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 23:44:56.121630 systemd-journald[200]: Collecting audit messages is disabled.
Feb 13 23:44:56.121665 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 23:44:56.121681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 23:44:56.121702 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 23:44:56.121718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 23:44:56.121733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 23:44:56.121747 kernel: Bridge firewalling registered
Feb 13 23:44:56.121767 systemd-journald[200]: Journal started
Feb 13 23:44:56.121798 systemd-journald[200]: Runtime Journal (/run/log/journal/be4546bc76744021a76c0ac5efa79b09) is 4.7M, max 38.0M, 33.2M free.
Feb 13 23:44:56.077086 systemd-modules-load[201]: Inserted module 'overlay'
Feb 13 23:44:56.165745 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 23:44:56.109179 systemd-modules-load[201]: Inserted module 'br_netfilter'
Feb 13 23:44:56.168002 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 23:44:56.170186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 23:44:56.182417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 23:44:56.185400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 23:44:56.196731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 23:44:56.198711 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 23:44:56.210364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 23:44:56.215284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 23:44:56.226913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 23:44:56.229957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 23:44:56.235429 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 23:44:56.240402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 23:44:56.244264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 23:44:56.258430 dracut-cmdline[235]: dracut-dracut-053
Feb 13 23:44:56.262213 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 23:44:56.292166 systemd-resolved[236]: Positive Trust Anchors:
Feb 13 23:44:56.292201 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 23:44:56.292245 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 23:44:56.297811 systemd-resolved[236]: Defaulting to hostname 'linux'.
Feb 13 23:44:56.302756 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 23:44:56.303991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 23:44:56.374227 kernel: SCSI subsystem initialized
Feb 13 23:44:56.386264 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 23:44:56.400221 kernel: iscsi: registered transport (tcp)
Feb 13 23:44:56.425571 kernel: iscsi: registered transport (qla4xxx)
Feb 13 23:44:56.425632 kernel: QLogic iSCSI HBA Driver
Feb 13 23:44:56.486496 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 23:44:56.495574 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 23:44:56.530572 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 23:44:56.530693 kernel: device-mapper: uevent: version 1.0.3
Feb 13 23:44:56.533753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 23:44:56.585324 kernel: raid6: sse2x4 gen() 7710 MB/s
Feb 13 23:44:56.603323 kernel: raid6: sse2x2 gen() 5431 MB/s
Feb 13 23:44:56.621975 kernel: raid6: sse2x1 gen() 5380 MB/s
Feb 13 23:44:56.622103 kernel: raid6: using algorithm sse2x4 gen() 7710 MB/s
Feb 13 23:44:56.641129 kernel: raid6: .... xor() 5005 MB/s, rmw enabled
Feb 13 23:44:56.641296 kernel: raid6: using ssse3x2 recovery algorithm
Feb 13 23:44:56.667267 kernel: xor: automatically using best checksumming function avx
Feb 13 23:44:56.872518 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 23:44:56.894771 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 23:44:56.904708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 23:44:56.933302 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Feb 13 23:44:56.940661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 23:44:56.957553 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 23:44:56.981090 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Feb 13 23:44:57.027339 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 23:44:57.033398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 23:44:57.163478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 23:44:57.172415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 23:44:57.210280 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 23:44:57.213784 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 23:44:57.215457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 23:44:57.217658 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 23:44:57.227322 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 23:44:57.256758 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 23:44:57.345220 kernel: ACPI: bus type USB registered
Feb 13 23:44:57.355222 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 23:44:57.442474 kernel: usbcore: registered new interface driver usbfs
Feb 13 23:44:57.442512 kernel: usbcore: registered new interface driver hub
Feb 13 23:44:57.442533 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 23:44:57.442775 kernel: usbcore: registered new device driver usb
Feb 13 23:44:57.442798 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 23:44:57.442817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 23:44:57.442835 kernel: GPT:17805311 != 125829119
Feb 13 23:44:57.442853 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 23:44:57.442871 kernel: GPT:17805311 != 125829119
Feb 13 23:44:57.442888 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 23:44:57.442906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 23:44:57.437176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 23:44:57.437382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 23:44:57.438705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 23:44:57.449262 kernel: libata version 3.00 loaded.
Feb 13 23:44:57.441381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 23:44:57.441592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 23:44:57.443114 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 23:44:57.458648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 23:44:57.483239 kernel: AVX version of gcm_enc/dec engaged.
Feb 13 23:44:57.507377 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 23:44:57.549588 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 13 23:44:57.549895 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 23:44:57.550130 kernel: AES CTR mode by8 optimization enabled
Feb 13 23:44:57.550160 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 23:44:57.550457 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 13 23:44:57.550699 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 23:44:57.550914 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 23:44:57.566661 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 23:44:57.566702 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 23:44:57.567829 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 23:44:57.568508 kernel: hub 1-0:1.0: USB hub found
Feb 13 23:44:57.568774 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 23:44:57.569015 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 23:44:57.571402 kernel: hub 2-0:1.0: USB hub found
Feb 13 23:44:57.571685 kernel: scsi host0: ahci
Feb 13 23:44:57.571967 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 23:44:57.572220 kernel: scsi host1: ahci
Feb 13 23:44:57.572464 kernel: scsi host2: ahci
Feb 13 23:44:57.572685 kernel: scsi host3: ahci
Feb 13 23:44:57.572892 kernel: scsi host4: ahci
Feb 13 23:44:57.573128 kernel: scsi host5: ahci
Feb 13 23:44:57.576010 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Feb 13 23:44:57.576045 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Feb 13 23:44:57.576066 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Feb 13 23:44:57.576101 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Feb 13 23:44:57.576123 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Feb 13 23:44:57.576149 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Feb 13 23:44:57.576170 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Feb 13 23:44:57.558460 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 23:44:57.671029 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (479)
Feb 13 23:44:57.671597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 23:44:57.685759 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 23:44:57.693284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 23:44:57.699441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 23:44:57.700312 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 23:44:57.710515 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 23:44:57.715441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:44:57.718152 disk-uuid[558]: Primary Header is updated. Feb 13 23:44:57.718152 disk-uuid[558]: Secondary Entries is updated. Feb 13 23:44:57.718152 disk-uuid[558]: Secondary Header is updated. Feb 13 23:44:57.725232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:44:57.734227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:44:57.765613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:44:57.776271 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 23:44:57.874218 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.879217 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.879256 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.881490 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.881557 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.884131 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 23:44:57.922225 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 23:44:57.928630 kernel: usbcore: registered new interface driver usbhid Feb 13 23:44:57.928700 kernel: usbhid: USB HID core driver Feb 13 23:44:57.936543 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 23:44:57.936587 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 23:44:58.739245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:44:58.739848 disk-uuid[559]: The operation has completed successfully. Feb 13 23:44:58.795009 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 23:44:58.795210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 23:44:58.820513 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 23:44:58.826007 sh[585]: Success Feb 13 23:44:58.844672 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 23:44:58.904929 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 23:44:58.916355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 23:44:58.918410 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 23:44:58.941243 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 23:44:58.941349 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:44:58.941398 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 23:44:58.943632 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 23:44:58.945342 kernel: BTRFS info (device dm-0): using free space tree Feb 13 23:44:58.955903 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Feb 13 23:44:58.957531 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 23:44:58.962428 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 23:44:58.965023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 23:44:58.991534 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:44:58.991622 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:44:58.991645 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:44:58.997217 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:44:59.013395 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:44:59.012964 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 23:44:59.020724 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 23:44:59.031542 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 23:44:59.333698 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 23:44:59.347462 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 23:44:59.353946 ignition[686]: Ignition 2.19.0 Feb 13 23:44:59.353966 ignition[686]: Stage: fetch-offline Feb 13 23:44:59.354027 ignition[686]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:44:59.354047 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:44:59.354251 ignition[686]: parsed url from cmdline: "" Feb 13 23:44:59.359299 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 23:44:59.354258 ignition[686]: no config URL provided Feb 13 23:44:59.354269 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 23:44:59.354291 ignition[686]: no config at "/usr/lib/ignition/user.ign" Feb 13 23:44:59.354301 ignition[686]: failed to fetch config: resource requires networking Feb 13 23:44:59.354618 ignition[686]: Ignition finished successfully Feb 13 23:44:59.387226 systemd-networkd[772]: lo: Link UP Feb 13 23:44:59.387244 systemd-networkd[772]: lo: Gained carrier Feb 13 23:44:59.390261 systemd-networkd[772]: Enumeration completed Feb 13 23:44:59.390462 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 23:44:59.390915 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:44:59.390922 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 23:44:59.391423 systemd[1]: Reached target network.target - Network. Feb 13 23:44:59.392936 systemd-networkd[772]: eth0: Link UP Feb 13 23:44:59.392943 systemd-networkd[772]: eth0: Gained carrier Feb 13 23:44:59.392955 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:44:59.402464 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 23:44:59.435313 systemd-networkd[772]: eth0: DHCPv4 address 10.230.61.58/30, gateway 10.230.61.57 acquired from 10.230.61.57 Feb 13 23:44:59.445729 ignition[776]: Ignition 2.19.0 Feb 13 23:44:59.445753 ignition[776]: Stage: fetch Feb 13 23:44:59.446036 ignition[776]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:44:59.446057 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:44:59.446220 ignition[776]: parsed url from cmdline: "" Feb 13 23:44:59.446227 ignition[776]: no config URL provided Feb 13 23:44:59.446238 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 23:44:59.446256 ignition[776]: no config at "/usr/lib/ignition/user.ign" Feb 13 23:44:59.446432 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 23:44:59.446983 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 23:44:59.447025 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 23:44:59.462030 ignition[776]: GET result: OK Feb 13 23:44:59.462173 ignition[776]: parsing config with SHA512: 076e9009299a3972c995eb43b32aaeaf92604d2a18b22a007285e5192d4ca156175bf3654d4f13697cbc03c24e191a5dceac55c8dff5113bcd029145639cdf75 Feb 13 23:44:59.467272 unknown[776]: fetched base config from "system" Feb 13 23:44:59.467289 unknown[776]: fetched base config from "system" Feb 13 23:44:59.467748 ignition[776]: fetch: fetch complete Feb 13 23:44:59.467299 unknown[776]: fetched user config from "openstack" Feb 13 23:44:59.467757 ignition[776]: fetch: fetch passed Feb 13 23:44:59.471506 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 23:44:59.467826 ignition[776]: Ignition finished successfully Feb 13 23:44:59.482519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 23:44:59.571349 ignition[784]: Ignition 2.19.0 Feb 13 23:44:59.571383 ignition[784]: Stage: kargs Feb 13 23:44:59.571672 ignition[784]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:44:59.574652 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 23:44:59.571695 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:44:59.573016 ignition[784]: kargs: kargs passed Feb 13 23:44:59.573126 ignition[784]: Ignition finished successfully Feb 13 23:44:59.581508 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 23:44:59.633980 ignition[790]: Ignition 2.19.0 Feb 13 23:44:59.634002 ignition[790]: Stage: disks Feb 13 23:44:59.634340 ignition[790]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:44:59.634377 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:44:59.639216 ignition[790]: disks: disks passed Feb 13 23:44:59.639316 ignition[790]: Ignition finished successfully Feb 13 23:44:59.642049 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 23:44:59.643554 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 23:44:59.644447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 23:44:59.646086 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 23:44:59.647681 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 23:44:59.649284 systemd[1]: Reached target basic.target - Basic System. Feb 13 23:44:59.660524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 23:44:59.682057 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 23:44:59.690213 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 23:44:59.696386 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 23:44:59.827221 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 23:44:59.828384 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 23:44:59.829941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 23:44:59.843372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 23:44:59.846336 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 23:44:59.848286 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 23:44:59.851629 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 23:44:59.853651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 23:44:59.867555 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Feb 13 23:44:59.867593 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:44:59.867615 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:44:59.867636 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:44:59.867656 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:44:59.853699 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 23:44:59.871150 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 23:44:59.874409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 23:44:59.886677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 23:44:59.971226 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 23:44:59.979245 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Feb 13 23:44:59.988704 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 23:44:59.996700 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 23:45:00.166453 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 23:45:00.173352 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 23:45:00.177439 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 23:45:00.193207 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 23:45:00.195806 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:45:00.251110 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 23:45:00.253382 ignition[926]: INFO : Ignition 2.19.0 Feb 13 23:45:00.253382 ignition[926]: INFO : Stage: mount Feb 13 23:45:00.253382 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:45:00.253382 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:45:00.257981 ignition[926]: INFO : mount: mount passed Feb 13 23:45:00.257981 ignition[926]: INFO : Ignition finished successfully Feb 13 23:45:00.256025 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 23:45:00.635598 systemd-networkd[772]: eth0: Gained IPv6LL Feb 13 23:45:02.144530 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8f4e:24:19ff:fee6:3d3a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8f4e:24:19ff:fee6:3d3a/64 assigned by NDisc. Feb 13 23:45:02.144548 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 23:45:07.116641 coreos-metadata[808]: Feb 13 23:45:07.116 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:45:07.139708 coreos-metadata[808]: Feb 13 23:45:07.139 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 23:45:07.153236 coreos-metadata[808]: Feb 13 23:45:07.153 INFO Fetch successful Feb 13 23:45:07.154279 coreos-metadata[808]: Feb 13 23:45:07.153 INFO wrote hostname srv-gs5j1.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 23:45:07.155895 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 23:45:07.156099 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 23:45:07.171418 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 23:45:07.180954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 23:45:07.200230 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) Feb 13 23:45:07.207545 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:45:07.207593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:45:07.207614 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:45:07.212228 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:45:07.215578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 23:45:07.253345 ignition[962]: INFO : Ignition 2.19.0 Feb 13 23:45:07.253345 ignition[962]: INFO : Stage: files Feb 13 23:45:07.255346 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:45:07.255346 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:45:07.255346 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Feb 13 23:45:07.258133 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 23:45:07.258133 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 23:45:07.260354 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 23:45:07.261439 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 23:45:07.261439 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 23:45:07.261098 unknown[962]: wrote ssh authorized keys file for user: core Feb 13 23:45:07.264466 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 23:45:07.264466 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 23:45:07.445352 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 23:45:07.774752 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 23:45:07.774752 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 23:45:07.777946 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 23:45:07.796653 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 23:45:07.796653 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 23:45:07.796653 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 23:45:08.339088 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 23:45:09.803547 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 23:45:09.803547 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 23:45:09.814499 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 23:45:09.814499 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 23:45:09.814499 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 23:45:09.814499 ignition[962]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 23:45:09.820542 ignition[962]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 23:45:09.820542 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 23:45:09.820542 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 23:45:09.820542 ignition[962]: INFO : files: files passed Feb 13 23:45:09.820542 ignition[962]: INFO : Ignition finished successfully Feb 13 23:45:09.822025 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 23:45:09.835753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 23:45:09.838429 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 23:45:09.862309 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 23:45:09.862524 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 23:45:09.877007 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:45:09.877007 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:45:09.881173 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:45:09.881798 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 23:45:09.884143 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 23:45:09.891557 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 23:45:09.942463 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 23:45:09.942694 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 23:45:09.944988 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Feb 13 23:45:09.946214 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 23:45:09.948059 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 23:45:09.962592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 23:45:09.981943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 23:45:09.988452 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 23:45:10.011822 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 23:45:10.012947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:45:10.014857 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 23:45:10.016446 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 23:45:10.016647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 23:45:10.018611 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 23:45:10.019603 systemd[1]: Stopped target basic.target - Basic System. Feb 13 23:45:10.021124 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 23:45:10.022744 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 23:45:10.024295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 23:45:10.026098 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 23:45:10.029619 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 23:45:10.030629 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 23:45:10.033341 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 23:45:10.034257 systemd[1]: Stopped target swap.target - Swaps. Feb 13 23:45:10.035575 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 23:45:10.035802 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 23:45:10.037653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 23:45:10.038675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 23:45:10.040228 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 23:45:10.040464 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:45:10.041928 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 23:45:10.042269 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 23:45:10.044180 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 23:45:10.044406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 23:45:10.048186 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 23:45:10.048381 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 23:45:10.064243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 23:45:10.067558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 23:45:10.068340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 23:45:10.068641 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:45:10.071563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 23:45:10.072365 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 23:45:10.081035 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 23:45:10.081222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 23:45:10.095904 ignition[1014]: INFO : Ignition 2.19.0 Feb 13 23:45:10.105487 ignition[1014]: INFO : Stage: umount Feb 13 23:45:10.105487 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:45:10.105487 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:45:10.105487 ignition[1014]: INFO : umount: umount passed Feb 13 23:45:10.105487 ignition[1014]: INFO : Ignition finished successfully Feb 13 23:45:10.105666 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 23:45:10.105865 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 23:45:10.108280 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 23:45:10.108444 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 23:45:10.110178 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 23:45:10.110323 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 23:45:10.111602 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 23:45:10.111710 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 23:45:10.113034 systemd[1]: Stopped target network.target - Network. Feb 13 23:45:10.114393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 23:45:10.114496 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 23:45:10.115981 systemd[1]: Stopped target paths.target - Path Units. Feb 13 23:45:10.117399 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 23:45:10.119421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 23:45:10.122905 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 23:45:10.124520 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 23:45:10.126128 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 23:45:10.126221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 23:45:10.127503 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 23:45:10.127572 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 23:45:10.129094 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 23:45:10.129241 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 23:45:10.130818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 23:45:10.130913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 23:45:10.132559 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 23:45:10.134567 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 23:45:10.138397 systemd-networkd[772]: eth0: DHCPv6 lease lost Feb 13 23:45:10.142372 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 23:45:10.142604 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 23:45:10.145559 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 23:45:10.145647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 23:45:10.155434 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 23:45:10.157724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 23:45:10.157847 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 23:45:10.160557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:45:10.163501 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 23:45:10.163705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 23:45:10.175978 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 23:45:10.176313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:45:10.178039 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 23:45:10.178274 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 23:45:10.181433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 23:45:10.181524 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 23:45:10.184933 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 23:45:10.185014 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 23:45:10.187840 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 23:45:10.187937 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 23:45:10.188875 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 23:45:10.188946 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 23:45:10.192451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 23:45:10.192549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:45:10.202913 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 23:45:10.205377 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 23:45:10.205469 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:45:10.206266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 23:45:10.206338 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 23:45:10.207095 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 23:45:10.207166 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:45:10.209473 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 23:45:10.209562 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 23:45:10.210716 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 23:45:10.210790 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:45:10.211620 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 23:45:10.211693 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:45:10.213508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 23:45:10.213591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:45:10.218139 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 23:45:10.219405 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 23:45:10.219567 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 23:45:10.221321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 23:45:10.221493 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 23:45:10.223565 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 23:45:10.225103 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 23:45:10.225215 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 23:45:10.235481 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 23:45:10.249034 systemd[1]: Switching root. Feb 13 23:45:10.282698 systemd-journald[200]: Journal stopped Feb 13 23:45:11.846261 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Feb 13 23:45:11.846388 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 23:45:11.846415 kernel: SELinux: policy capability open_perms=1 Feb 13 23:45:11.846456 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 23:45:11.846478 kernel: SELinux: policy capability always_check_network=0 Feb 13 23:45:11.846498 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 23:45:11.846531 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 23:45:11.846553 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 23:45:11.846575 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 23:45:11.846603 kernel: audit: type=1403 audit(1739490310.520:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 23:45:11.846631 systemd[1]: Successfully loaded SELinux policy in 51.301ms. Feb 13 23:45:11.846672 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.822ms. Feb 13 23:45:11.846711 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 23:45:11.846735 systemd[1]: Detected virtualization kvm. Feb 13 23:45:11.846757 systemd[1]: Detected architecture x86-64. Feb 13 23:45:11.846777 systemd[1]: Detected first boot. Feb 13 23:45:11.846798 systemd[1]: Hostname set to <srv-gs5j1.gb1.brightbox.com>. Feb 13 23:45:11.846819 systemd[1]: Initializing machine ID from VM UUID. Feb 13 23:45:11.846840 zram_generator::config[1056]: No configuration found. Feb 13 23:45:11.846875 systemd[1]: Populated /etc with preset unit settings. Feb 13 23:45:11.846910 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 23:45:11.846939 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 23:45:11.846962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 23:45:11.846998 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 23:45:11.847037 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 23:45:11.847067 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 23:45:11.847090 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 23:45:11.847111 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 23:45:11.848722 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 23:45:11.848756 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 23:45:11.848779 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 23:45:11.848800 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:45:11.848822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 23:45:11.848843 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 23:45:11.848864 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 23:45:11.848886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 23:45:11.848907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 23:45:11.848952 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 23:45:11.848991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 23:45:11.849015 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 23:45:11.849037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 23:45:11.849058 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 23:45:11.849081 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 23:45:11.849117 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:45:11.849147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 23:45:11.849169 systemd[1]: Reached target slices.target - Slice Units. Feb 13 23:45:11.849205 systemd[1]: Reached target swap.target - Swaps. Feb 13 23:45:11.849231 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 23:45:11.849253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 23:45:11.849274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 23:45:11.849310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 23:45:11.849364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 23:45:11.849388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 23:45:11.849409 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 23:45:11.849438 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 23:45:11.849477 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 23:45:11.849502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:11.849523 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 23:45:11.849562 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 23:45:11.849586 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 23:45:11.849608 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Feb 13 23:45:11.849629 systemd[1]: Reached target machines.target - Containers. Feb 13 23:45:11.849650 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 23:45:11.849679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:45:11.849702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 23:45:11.849723 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 23:45:11.849759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 23:45:11.849783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 23:45:11.849804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 23:45:11.849826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 23:45:11.849864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 23:45:11.849889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 23:45:11.849910 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 23:45:11.849930 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 23:45:11.849952 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 23:45:11.850000 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 23:45:11.850025 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 23:45:11.850046 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 23:45:11.850066 kernel: loop: module loaded Feb 13 23:45:11.850094 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 23:45:11.850116 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 23:45:11.850137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 23:45:11.850166 kernel: fuse: init (API version 7.39) Feb 13 23:45:11.850204 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 23:45:11.850248 systemd[1]: Stopped verity-setup.service. Feb 13 23:45:11.850272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:11.850301 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 23:45:11.850324 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 23:45:11.850345 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 23:45:11.850366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 23:45:11.850403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 23:45:11.850427 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 23:45:11.850487 systemd-journald[1145]: Collecting audit messages is disabled. Feb 13 23:45:11.850554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:45:11.850579 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 23:45:11.850607 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 23:45:11.850648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 23:45:11.850673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 23:45:11.850715 systemd-journald[1145]: Journal started Feb 13 23:45:11.850751 systemd-journald[1145]: Runtime Journal (/run/log/journal/be4546bc76744021a76c0ac5efa79b09) is 4.7M, max 38.0M, 33.2M free. Feb 13 23:45:11.437601 systemd[1]: Queued start job for default target multi-user.target. Feb 13 23:45:11.851462 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 23:45:11.459142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 23:45:11.459870 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 23:45:11.857284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 23:45:11.857541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 23:45:11.859128 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 23:45:11.867227 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 23:45:11.868648 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 23:45:11.868890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 23:45:11.870056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 23:45:11.871252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 23:45:11.872409 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 23:45:11.887045 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 23:45:11.895461 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 23:45:11.904351 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 23:45:11.905182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 23:45:11.905275 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 23:45:11.908484 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 23:45:11.919546 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 23:45:11.922304 kernel: ACPI: bus type drm_connector registered Feb 13 23:45:11.926344 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 23:45:11.928431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:45:11.939527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 23:45:11.944427 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 23:45:11.946119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 23:45:11.949653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 23:45:11.951330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 23:45:11.956431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 23:45:11.968441 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 23:45:11.978453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 23:45:11.985651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 23:45:11.986981 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 23:45:11.992407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 23:45:11.993636 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 23:45:11.994686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 23:45:11.995900 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 23:45:12.028428 systemd-journald[1145]: Time spent on flushing to /var/log/journal/be4546bc76744021a76c0ac5efa79b09 is 201.296ms for 1140 entries. Feb 13 23:45:12.028428 systemd-journald[1145]: System Journal (/var/log/journal/be4546bc76744021a76c0ac5efa79b09) is 8.0M, max 584.8M, 576.8M free. Feb 13 23:45:12.295616 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 23:45:12.295686 systemd-journald[1145]: Received client request to flush runtime journal. Feb 13 23:45:12.295746 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 23:45:12.295785 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 23:45:12.295818 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 23:45:12.051253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 23:45:12.056680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 23:45:12.076744 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 23:45:12.078455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:45:12.228511 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 23:45:12.230540 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 23:45:12.248842 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 23:45:12.248864 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 23:45:12.272369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:45:12.284410 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 23:45:12.285741 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 23:45:12.294466 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 23:45:12.299679 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 23:45:12.350450 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 23:45:12.361220 kernel: loop3: detected capacity change from 0 to 8 Feb 13 23:45:12.391232 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 23:45:12.430368 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 23:45:12.440535 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 23:45:12.536838 kernel: loop6: detected capacity change from 0 to 210664 Feb 13 23:45:12.554442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 23:45:12.618274 kernel: loop7: detected capacity change from 0 to 8 Feb 13 23:45:12.611146 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 23:45:12.612282 (sd-merge)[1213]: Merged extensions into '/usr'. Feb 13 23:45:12.622046 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 23:45:12.626086 systemd[1]: Reloading... Feb 13 23:45:12.743353 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Feb 13 23:45:12.743384 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Feb 13 23:45:12.779261 zram_generator::config[1239]: No configuration found. Feb 13 23:45:13.281099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:45:13.336235 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 23:45:13.357767 systemd[1]: Reloading finished in 730 ms. Feb 13 23:45:13.413842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 23:45:13.415708 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 23:45:13.417109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:45:13.431475 systemd[1]: Starting ensure-sysext.service... Feb 13 23:45:13.442229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 23:45:13.460471 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Feb 13 23:45:13.460493 systemd[1]: Reloading... Feb 13 23:45:13.526895 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 23:45:13.527442 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 23:45:13.530858 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 23:45:13.533011 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Feb 13 23:45:13.533149 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Feb 13 23:45:13.540992 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 23:45:13.541008 systemd-tmpfiles[1301]: Skipping /boot Feb 13 23:45:13.572019 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 23:45:13.572042 systemd-tmpfiles[1301]: Skipping /boot Feb 13 23:45:13.602260 zram_generator::config[1327]: No configuration found. Feb 13 23:45:13.859686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:45:13.929724 systemd[1]: Reloading finished in 468 ms. Feb 13 23:45:13.954390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 23:45:13.970213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:45:13.991432 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 23:45:13.996998 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 23:45:14.001156 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 23:45:14.009444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 23:45:14.014472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:45:14.019484 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 23:45:14.028723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.029045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:45:14.039564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 23:45:14.044844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 23:45:14.056608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 23:45:14.057578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:45:14.057748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.063064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.064619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:45:14.064886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:45:14.086689 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 23:45:14.089296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.094359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 23:45:14.096091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 23:45:14.112093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 23:45:14.112364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 23:45:14.115789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.117354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:45:14.128362 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 23:45:14.141635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 23:45:14.145449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:45:14.145669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:45:14.149261 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 23:45:14.152637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Feb 13 23:45:14.156134 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 23:45:14.157289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 23:45:14.175648 systemd[1]: Finished ensure-sysext.service. Feb 13 23:45:14.179555 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 23:45:14.184626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 23:45:14.185291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 23:45:14.187988 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 23:45:14.191556 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 23:45:14.191829 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 23:45:14.198310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 23:45:14.198473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 23:45:14.207471 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 23:45:14.211472 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 23:45:14.212280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 23:45:14.223293 augenrules[1428]: No rules Feb 13 23:45:14.224961 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 23:45:14.227982 systemd-udevd[1390]: Using default interface naming scheme 'v255'. Feb 13 23:45:14.260177 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 23:45:14.287062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:45:14.303437 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 23:45:14.327333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 23:45:14.328391 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 23:45:14.361871 systemd-resolved[1389]: Positive Trust Anchors: Feb 13 23:45:14.361903 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 23:45:14.361950 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 23:45:14.379035 systemd-resolved[1389]: Using system hostname 'srv-gs5j1.gb1.brightbox.com'. Feb 13 23:45:14.386772 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 23:45:14.387781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
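Editor's note: the positive trust anchor systemd-resolved prints above is the DNSSEC root key-signing key (KSK-2017, key tag 20326); per RFC 4034 the DS fields are key tag, algorithm (8 = RSA/SHA-256), digest type (2 = SHA-256) and digest. Parsing the record exactly as logged:

    # DS record copied verbatim from the systemd-resolved line above.
    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _, rtype, key_tag, algorithm, digest_type, digest = record.split()
    assert rtype == "DS" and key_tag == "20326"
    print(f"alg {algorithm} (RSA/SHA-256), digest type {digest_type} (SHA-256)")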
Feb 13 23:45:14.432395 systemd-networkd[1439]: lo: Link UP Feb 13 23:45:14.432409 systemd-networkd[1439]: lo: Gained carrier Feb 13 23:45:14.433768 systemd-networkd[1439]: Enumeration completed Feb 13 23:45:14.433923 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 23:45:14.434838 systemd[1]: Reached target network.target - Network. Feb 13 23:45:14.443395 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 23:45:14.444571 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 23:45:14.521244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1448) Feb 13 23:45:14.574642 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:45:14.574659 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 23:45:14.583799 systemd-networkd[1439]: eth0: Link UP Feb 13 23:45:14.583824 systemd-networkd[1439]: eth0: Gained carrier Feb 13 23:45:14.583849 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:45:14.618342 systemd-networkd[1439]: eth0: DHCPv4 address 10.230.61.58/30, gateway 10.230.61.57 acquired from 10.230.61.57 Feb 13 23:45:14.622437 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 23:45:14.622167 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Feb 13 23:45:14.648237 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 23:45:14.651859 kernel: ACPI: button: Power Button [PWRF] Feb 13 23:45:14.804230 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 23:45:14.811256 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 23:45:14.819524 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 23:45:14.819865 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 23:45:14.856853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 23:45:14.868505 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 23:45:14.899589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:45:14.902281 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 23:45:15.095261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:45:15.118371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 23:45:15.129590 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 23:45:15.147846 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 23:45:15.187694 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 23:45:15.188931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 23:45:15.189728 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 23:45:15.190723 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
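Editor's note: the DHCPv4 lease above is a /30, a four-address subnet with exactly two usable hosts, which is why the gateway (10.230.61.57) and the instance address (10.230.61.58) are adjacent. The standard library confirms it:

    import ipaddress

    iface = ipaddress.ip_interface("10.230.61.58/30")  # address from the lease above
    print(iface.network)                # 10.230.61.56/30
    print(list(iface.network.hosts()))  # [10.230.61.57, 10.230.61.58] -- gateway + host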
Feb 13 23:45:15.191736 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 23:45:15.193120 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 23:45:15.194145 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 23:45:15.195000 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 23:45:15.195816 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 23:45:15.195882 systemd[1]: Reached target paths.target - Path Units. Feb 13 23:45:15.196546 systemd[1]: Reached target timers.target - Timer Units. Feb 13 23:45:15.199369 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 23:45:15.202391 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 23:45:15.208636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 23:45:15.212495 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 23:45:15.214016 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 23:45:15.215250 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 23:45:15.216077 systemd[1]: Reached target basic.target - Basic System. Feb 13 23:45:15.216996 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 23:45:15.217186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 23:45:15.220359 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 23:45:15.225052 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 23:45:15.233459 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 23:45:15.241452 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 23:45:15.243973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 23:45:15.251433 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 23:45:15.252258 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 23:45:15.259421 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 23:45:15.261913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 23:45:15.265421 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 23:45:15.268973 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 23:45:15.281396 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 23:45:15.293731 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 23:45:15.301750 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 23:45:15.304460 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 23:45:15.319903 jq[1482]: false Feb 13 23:45:15.320348 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 23:45:15.323737 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 23:45:15.332245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 23:45:15.333402 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 23:45:15.373408 extend-filesystems[1484]: Found loop4 Feb 13 23:45:15.376630 extend-filesystems[1484]: Found loop5 Feb 13 23:45:15.376630 extend-filesystems[1484]: Found loop6 Feb 13 23:45:15.387284 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 23:45:15.392957 jq[1492]: true Feb 13 23:45:15.396052 extend-filesystems[1484]: Found loop7 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda1 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda2 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda3 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found usr Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda4 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda6 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda7 Feb 13 23:45:15.396052 extend-filesystems[1484]: Found vda9 Feb 13 23:45:15.396052 extend-filesystems[1484]: Checking size of /dev/vda9 Feb 13 23:45:15.390165 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 23:45:15.427995 dbus-daemon[1481]: [system] SELinux support is enabled Feb 13 23:45:15.428305 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 23:45:15.469114 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 23:45:15.492072 tar[1499]: linux-amd64/helm Feb 13 23:45:15.433648 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 23:45:15.495867 jq[1508]: true Feb 13 23:45:15.433694 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 23:45:15.436332 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 23:45:15.436365 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 23:45:15.485448 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 23:45:15.486694 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 23:45:15.507278 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 23:45:15.508308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 23:45:15.535229 extend-filesystems[1484]: Resized partition /dev/vda9 Feb 13 23:45:15.558343 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Feb 13 23:45:15.559642 update_engine[1491]: I20250213 23:45:15.546113 1491 main.cc:92] Flatcar Update Engine starting Feb 13 23:45:15.567266 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 23:45:15.574437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1450) Feb 13 23:45:15.574603 systemd[1]: Started update-engine.service - Update Engine. Feb 13 23:45:15.578435 update_engine[1491]: I20250213 23:45:15.577450 1491 update_check_scheduler.cc:74] Next update check in 9m16s Feb 13 23:45:15.585875 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 23:45:15.660635 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 23:45:15.660703 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 23:45:15.672898 systemd-logind[1490]: New seat seat0. Feb 13 23:45:15.690345 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 23:45:15.805516 systemd-networkd[1439]: eth0: Gained IPv6LL Feb 13 23:45:15.941434 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Feb 13 23:45:15.951459 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 23:45:15.955100 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 23:45:15.976608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:45:15.991007 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Feb 13 23:45:15.996638 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 23:45:15.998310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 23:45:16.008619 systemd[1]: Starting sshkeys.service... Feb 13 23:45:16.039363 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 23:45:16.047655 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 23:45:16.079701 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 23:45:16.081691 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 23:45:16.084483 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 23:45:16.099656 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 23:45:16.273407 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 23:45:16.286967 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 23:45:16.400647 polkitd[1548]: Started polkitd version 121 Feb 13 23:45:16.357874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 23:45:16.427785 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 23:45:16.427785 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 23:45:16.427785 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
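Editor's note: at the 4 KiB block size reported above, the online resize grows the root filesystem from about 6.2 GiB to about 57.7 GiB:

    # Block counts from the EXT4-fs / resize2fs messages above (4 KiB blocks).
    old_blocks, new_blocks, block_size = 1_617_920, 15_121_403, 4096
    to_gib = lambda blocks: blocks * block_size / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # 6.17 GiB -> 57.68 GiB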
Feb 13 23:45:16.433451 extend-filesystems[1484]: Resized filesystem in /dev/vda9 Feb 13 23:45:16.429586 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 23:45:16.434917 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 23:45:16.503910 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 23:45:16.504021 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 23:45:16.520264 polkitd[1548]: Finished loading, compiling and executing 2 rules Feb 13 23:45:16.527922 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 23:45:16.528181 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 23:45:16.548283 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 23:45:16.615850 systemd-hostnamed[1516]: Hostname set to (static) Feb 13 23:45:16.688909 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Feb 13 23:45:16.692339 systemd-networkd[1439]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8f4e:24:19ff:fee6:3d3a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8f4e:24:19ff:fee6:3d3a/64 assigned by NDisc. Feb 13 23:45:16.692347 systemd-networkd[1439]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 23:45:16.700231 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 23:45:16.713958 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 23:45:16.880599 containerd[1511]: time="2025-02-13T23:45:16.880239600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 23:45:16.918603 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 23:45:16.942855 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 23:45:16.956720 systemd[1]: Started sshd@0-10.230.61.58:22-147.75.109.163:55154.service - OpenSSH per-connection server daemon (147.75.109.163:55154). Feb 13 23:45:16.968321 containerd[1511]: time="2025-02-13T23:45:16.967141831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.980224 containerd[1511]: time="2025-02-13T23:45:16.979958169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:45:16.980224 containerd[1511]: time="2025-02-13T23:45:16.980022010Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 23:45:16.980224 containerd[1511]: time="2025-02-13T23:45:16.980050649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 23:45:16.982573 containerd[1511]: time="2025-02-13T23:45:16.981957679Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 23:45:16.982573 containerd[1511]: time="2025-02-13T23:45:16.982005580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982573 containerd[1511]: time="2025-02-13T23:45:16.982224784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982573 containerd[1511]: time="2025-02-13T23:45:16.982252452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982773 containerd[1511]: time="2025-02-13T23:45:16.982576411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982773 containerd[1511]: time="2025-02-13T23:45:16.982603069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982773 containerd[1511]: time="2025-02-13T23:45:16.982624901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982773 containerd[1511]: time="2025-02-13T23:45:16.982641571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.982946 containerd[1511]: time="2025-02-13T23:45:16.982893973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.984443 containerd[1511]: time="2025-02-13T23:45:16.983491326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:45:16.984443 containerd[1511]: time="2025-02-13T23:45:16.983657188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:45:16.984443 containerd[1511]: time="2025-02-13T23:45:16.983683359Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 23:45:16.984443 containerd[1511]: time="2025-02-13T23:45:16.983936819Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 23:45:16.984443 containerd[1511]: time="2025-02-13T23:45:16.984057178Z" level=info msg="metadata content store policy set" policy=shared Feb 13 23:45:17.001722 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 23:45:17.002376 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 23:45:17.010030 containerd[1511]: time="2025-02-13T23:45:17.009896910Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 23:45:17.010189 containerd[1511]: time="2025-02-13T23:45:17.010160498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 23:45:17.010355 containerd[1511]: time="2025-02-13T23:45:17.010328341Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 23:45:17.010919 containerd[1511]: time="2025-02-13T23:45:17.010438207Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 23:45:17.010919 containerd[1511]: time="2025-02-13T23:45:17.010491536Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 23:45:17.010919 containerd[1511]: time="2025-02-13T23:45:17.010799925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 23:45:17.011607 containerd[1511]: time="2025-02-13T23:45:17.011573694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 23:45:17.014215 containerd[1511]: time="2025-02-13T23:45:17.014164616Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 23:45:17.014338 containerd[1511]: time="2025-02-13T23:45:17.014311123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 23:45:17.014448 containerd[1511]: time="2025-02-13T23:45:17.014421503Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 23:45:17.014574 containerd[1511]: time="2025-02-13T23:45:17.014547263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.014687 containerd[1511]: time="2025-02-13T23:45:17.014660401Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.014835 containerd[1511]: time="2025-02-13T23:45:17.014794497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.014948 containerd[1511]: time="2025-02-13T23:45:17.014921865Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.015079 containerd[1511]: time="2025-02-13T23:45:17.015052083Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.016313 containerd[1511]: time="2025-02-13T23:45:17.016278343Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.016440 containerd[1511]: time="2025-02-13T23:45:17.016412651Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.016509 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 23:45:17.016933 containerd[1511]: time="2025-02-13T23:45:17.016900669Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018258570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018316817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018343925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018366788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018392617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018415412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018444367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018473360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018496280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018523941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018547414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018573469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018601660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018652505Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 23:45:17.019270 containerd[1511]: time="2025-02-13T23:45:17.018708925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.018770593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.018799994Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.018932273Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.018967370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.018986161Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.019005255Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.019022405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.019052375Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.019094817Z" level=info msg="NRI interface is disabled by configuration." Feb 13 23:45:17.020079 containerd[1511]: time="2025-02-13T23:45:17.019115906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 23:45:17.023968 containerd[1511]: time="2025-02-13T23:45:17.021735848Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 23:45:17.023968 containerd[1511]: time="2025-02-13T23:45:17.021870803Z" level=info msg="Connect containerd service" Feb 13 23:45:17.023968 containerd[1511]: time="2025-02-13T23:45:17.021966333Z" level=info msg="using legacy CRI server" Feb 13 23:45:17.023968 containerd[1511]: time="2025-02-13T23:45:17.021989619Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 23:45:17.023968 containerd[1511]: time="2025-02-13T23:45:17.022184806Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 23:45:17.027259 
containerd[1511]: time="2025-02-13T23:45:17.025638297Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027532875Z" level=info msg="Start subscribing containerd event" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027645071Z" level=info msg="Start recovering state" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027783268Z" level=info msg="Start event monitor" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027824270Z" level=info msg="Start snapshots syncer" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027852806Z" level=info msg="Start cni network conf syncer for default" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.027873575Z" level=info msg="Start streaming server" Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.028354668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.028451098Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 23:45:17.031003 containerd[1511]: time="2025-02-13T23:45:17.030013287Z" level=info msg="containerd successfully booted in 0.290060s" Feb 13 23:45:17.028692 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 23:45:17.129301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 23:45:17.143661 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 23:45:17.155530 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 23:45:17.157721 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 23:45:17.707239 tar[1499]: linux-amd64/LICENSE Feb 13 23:45:17.708481 tar[1499]: linux-amd64/README.md Feb 13 23:45:17.738011 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 23:45:17.943579 sshd[1590]: Accepted publickey for core from 147.75.109.163 port 55154 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:17.946413 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:17.969295 systemd-logind[1490]: New session 1 of user core. Feb 13 23:45:17.970730 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 23:45:17.981663 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 23:45:18.043425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 23:45:18.054088 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 23:45:18.077900 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 23:45:18.384738 systemd[1606]: Queued start job for default target default.target. Feb 13 23:45:18.389703 systemd[1606]: Created slice app.slice - User Application Slice. Feb 13 23:45:18.389749 systemd[1606]: Reached target paths.target - Paths. Feb 13 23:45:18.389789 systemd[1606]: Reached target timers.target - Timers. Feb 13 23:45:18.395378 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 23:45:18.436150 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
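Editor's note: containerd's "failed to load cni during init" error above is expected on a fresh node; the CRI plugin reads CNI configs from /etc/cni/net.d (NetworkPluginConfDir in the config dump above) and recovers once a network plugin installs one. A small sketch checking that directory:

    import glob, json

    # On a fresh node this directory is empty, hence the error above.
    configs = sorted(glob.glob("/etc/cni/net.d/*.conf*"))
    for path in configs:
        with open(path) as f:
            print(path, json.load(f).get("name"))
    if not configs:
        print("no CNI config yet -- install a network plugin")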
Feb 13 23:45:18.436494 systemd[1606]: Reached target sockets.target - Sockets. Feb 13 23:45:18.436530 systemd[1606]: Reached target basic.target - Basic System. Feb 13 23:45:18.436793 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 23:45:18.438331 systemd[1606]: Reached target default.target - Main User Target. Feb 13 23:45:18.438429 systemd[1606]: Startup finished in 348ms. Feb 13 23:45:18.445633 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 23:45:18.550354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:45:18.556599 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Feb 13 23:45:18.566002 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:45:19.159725 systemd[1]: Started sshd@1-10.230.61.58:22-147.75.109.163:47420.service - OpenSSH per-connection server daemon (147.75.109.163:47420). Feb 13 23:45:19.512478 kubelet[1621]: E0213 23:45:19.512385 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:45:19.516008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:45:19.516317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:45:19.517009 systemd[1]: kubelet.service: Consumed 2.129s CPU time. Feb 13 23:45:20.132891 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 47420 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:20.135996 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:20.146693 systemd-logind[1490]: New session 2 of user core. Feb 13 23:45:20.159670 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 23:45:20.753261 sshd[1628]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:20.757425 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. Feb 13 23:45:20.758041 systemd[1]: sshd@1-10.230.61.58:22-147.75.109.163:47420.service: Deactivated successfully. Feb 13 23:45:20.760342 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 23:45:20.762830 systemd-logind[1490]: Removed session 2. Feb 13 23:45:20.909984 systemd[1]: Started sshd@2-10.230.61.58:22-147.75.109.163:47430.service - OpenSSH per-connection server daemon (147.75.109.163:47430). Feb 13 23:45:21.802915 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 47430 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:21.805405 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:21.812059 systemd-logind[1490]: New session 3 of user core. Feb 13 23:45:21.821639 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 23:45:22.210595 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 23:45:22.219835 systemd-logind[1490]: New session 4 of user core. Feb 13 23:45:22.232542 systemd[1]: Started session-4.scope - Session 4 of User core. 
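Editor's note: the kubelet crash above (and the restart loop later in the log) is caused by the missing /var/lib/kubelet/config.yaml; that file is normally written by `kubeadm init` or `kubeadm join`, so the failures are expected until the node is bootstrapped. A trivial pre-check using the path from the error message:

    from pathlib import Path

    # kubelet exits immediately (run.go:74 above) while this file is absent;
    # kubeadm writes it when the node is initialized or joined.
    print(Path("/var/lib/kubelet/config.yaml").exists())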
Feb 13 23:45:22.245113 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 23:45:22.252164 systemd-logind[1490]: New session 5 of user core. Feb 13 23:45:22.261896 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 23:45:22.429291 sshd[1639]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:22.433487 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. Feb 13 23:45:22.434394 systemd[1]: sshd@2-10.230.61.58:22-147.75.109.163:47430.service: Deactivated successfully. Feb 13 23:45:22.436569 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 23:45:22.438979 systemd-logind[1490]: Removed session 3. Feb 13 23:45:22.562856 coreos-metadata[1480]: Feb 13 23:45:22.562 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:45:22.590489 coreos-metadata[1480]: Feb 13 23:45:22.590 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 23:45:22.597512 coreos-metadata[1480]: Feb 13 23:45:22.597 INFO Fetch failed with 404: resource not found Feb 13 23:45:22.597512 coreos-metadata[1480]: Feb 13 23:45:22.597 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 23:45:22.598219 coreos-metadata[1480]: Feb 13 23:45:22.598 INFO Fetch successful Feb 13 23:45:22.598347 coreos-metadata[1480]: Feb 13 23:45:22.598 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 23:45:22.610078 coreos-metadata[1480]: Feb 13 23:45:22.609 INFO Fetch successful Feb 13 23:45:22.610078 coreos-metadata[1480]: Feb 13 23:45:22.610 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 23:45:22.622535 coreos-metadata[1480]: Feb 13 23:45:22.622 INFO Fetch successful Feb 13 23:45:22.622535 coreos-metadata[1480]: Feb 13 23:45:22.622 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 23:45:22.637774 coreos-metadata[1480]: Feb 13 23:45:22.637 INFO Fetch successful Feb 13 23:45:22.637923 coreos-metadata[1480]: Feb 13 23:45:22.637 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 23:45:22.654788 coreos-metadata[1480]: Feb 13 23:45:22.654 INFO Fetch successful Feb 13 23:45:22.690656 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 23:45:22.692399 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 23:45:23.579997 coreos-metadata[1545]: Feb 13 23:45:23.579 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:45:23.603242 coreos-metadata[1545]: Feb 13 23:45:23.603 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 23:45:23.628175 coreos-metadata[1545]: Feb 13 23:45:23.628 INFO Fetch successful Feb 13 23:45:23.628604 coreos-metadata[1545]: Feb 13 23:45:23.628 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 23:45:23.658495 coreos-metadata[1545]: Feb 13 23:45:23.658 INFO Fetch successful Feb 13 23:45:23.663359 unknown[1545]: wrote ssh authorized keys file for user: core Feb 13 23:45:23.699799 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Feb 13 23:45:23.700568 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 23:45:23.704467 systemd[1]: Finished sshkeys.service. 
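Editor's note: both metadata agents above fail to locate an OpenStack config-drive and fall back to the EC2-compatible HTTP service at 169.254.169.254. The same endpoints can be queried by hand; this sketch uses only keys that appear in the log:

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"
    # Keys fetched by coreos-metadata in the log above.
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4"):
        with urllib.request.urlopen(BASE + key, timeout=2) as resp:
            print(key, "=", resp.read().decode())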
Feb 13 23:45:23.705935 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 23:45:23.708319 systemd[1]: Startup finished in 1.837s (kernel) + 14.752s (initrd) + 13.238s (userspace) = 29.828s. Feb 13 23:45:29.742092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 23:45:29.756522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:45:30.021799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:45:30.037728 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:45:30.098798 kubelet[1690]: E0213 23:45:30.098041 1690 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:45:30.103834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:45:30.104292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:45:32.600668 systemd[1]: Started sshd@3-10.230.61.58:22-147.75.109.163:53948.service - OpenSSH per-connection server daemon (147.75.109.163:53948). Feb 13 23:45:33.479256 sshd[1699]: Accepted publickey for core from 147.75.109.163 port 53948 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:33.481491 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:33.488129 systemd-logind[1490]: New session 6 of user core. Feb 13 23:45:33.496504 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 23:45:34.098112 sshd[1699]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:34.102576 systemd[1]: sshd@3-10.230.61.58:22-147.75.109.163:53948.service: Deactivated successfully. Feb 13 23:45:34.104684 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 23:45:34.106669 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. Feb 13 23:45:34.108419 systemd-logind[1490]: Removed session 6. Feb 13 23:45:34.261897 systemd[1]: Started sshd@4-10.230.61.58:22-147.75.109.163:53954.service - OpenSSH per-connection server daemon (147.75.109.163:53954). Feb 13 23:45:35.141118 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 53954 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:35.143305 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:35.149728 systemd-logind[1490]: New session 7 of user core. Feb 13 23:45:35.160504 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 23:45:35.753664 sshd[1706]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:35.758600 systemd[1]: sshd@4-10.230.61.58:22-147.75.109.163:53954.service: Deactivated successfully. Feb 13 23:45:35.760768 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 23:45:35.761776 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Feb 13 23:45:35.763381 systemd-logind[1490]: Removed session 7. Feb 13 23:45:35.920763 systemd[1]: Started sshd@5-10.230.61.58:22-147.75.109.163:53970.service - OpenSSH per-connection server daemon (147.75.109.163:53970). 
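Editor's note: the "Startup finished" summary above is self-consistent once display rounding is considered; systemd tracks each phase in microseconds, so the three rounded parts sum to 29.827 s against the printed total of 29.828 s:

    # Phase durations exactly as printed in the "Startup finished" line above.
    phases = {"kernel": 1.837, "initrd": 14.752, "userspace": 13.238}
    print(f"{sum(phases.values()):.3f} s")  # 29.827 s; the 1 ms gap is rounding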
Feb 13 23:45:36.804585 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 53970 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:36.806729 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:36.814867 systemd-logind[1490]: New session 8 of user core. Feb 13 23:45:36.820531 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 23:45:37.426833 sshd[1713]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:37.431608 systemd[1]: sshd@5-10.230.61.58:22-147.75.109.163:53970.service: Deactivated successfully. Feb 13 23:45:37.433676 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 23:45:37.434559 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Feb 13 23:45:37.436262 systemd-logind[1490]: Removed session 8. Feb 13 23:45:37.584854 systemd[1]: Started sshd@6-10.230.61.58:22-147.75.109.163:53982.service - OpenSSH per-connection server daemon (147.75.109.163:53982). Feb 13 23:45:38.460900 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 53982 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:38.463089 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:38.470383 systemd-logind[1490]: New session 9 of user core. Feb 13 23:45:38.480419 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 23:45:38.945609 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 23:45:38.946080 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:45:38.963796 sudo[1723]: pam_unix(sudo:session): session closed for user root Feb 13 23:45:39.106561 sshd[1720]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:39.111702 systemd[1]: sshd@6-10.230.61.58:22-147.75.109.163:53982.service: Deactivated successfully. Feb 13 23:45:39.114229 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 23:45:39.115343 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Feb 13 23:45:39.116959 systemd-logind[1490]: Removed session 9. Feb 13 23:45:39.260529 systemd[1]: Started sshd@7-10.230.61.58:22-147.75.109.163:58538.service - OpenSSH per-connection server daemon (147.75.109.163:58538). Feb 13 23:45:40.157806 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 58538 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:40.160590 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:40.162777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 23:45:40.169484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:45:40.173404 systemd-logind[1490]: New session 10 of user core. Feb 13 23:45:40.176587 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 23:45:40.484505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 23:45:40.484808 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:45:40.617849 kubelet[1739]: E0213 23:45:40.617739 1739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:45:40.620051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:45:40.620348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:45:40.636040 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 23:45:40.636573 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:45:40.641837 sudo[1748]: pam_unix(sudo:session): session closed for user root Feb 13 23:45:40.650481 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 23:45:40.650933 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:45:40.671592 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 23:45:40.674481 auditctl[1751]: No rules Feb 13 23:45:40.675039 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 23:45:40.675409 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 23:45:40.693878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 23:45:40.726574 augenrules[1769]: No rules Feb 13 23:45:40.727572 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 23:45:40.730176 sudo[1747]: pam_unix(sudo:session): session closed for user root Feb 13 23:45:40.874113 sshd[1728]: pam_unix(sshd:session): session closed for user core Feb 13 23:45:40.877935 systemd[1]: sshd@7-10.230.61.58:22-147.75.109.163:58538.service: Deactivated successfully. Feb 13 23:45:40.880076 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 23:45:40.881836 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Feb 13 23:45:40.883569 systemd-logind[1490]: Removed session 10. Feb 13 23:45:41.029541 systemd[1]: Started sshd@8-10.230.61.58:22-147.75.109.163:58554.service - OpenSSH per-connection server daemon (147.75.109.163:58554). Feb 13 23:45:41.913474 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 58554 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:45:41.915873 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:45:41.922941 systemd-logind[1490]: New session 11 of user core. Feb 13 23:45:41.933430 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 23:45:42.389393 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 23:45:42.389859 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:45:43.106550 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 23:45:43.122835 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 23:45:43.960953 dockerd[1796]: time="2025-02-13T23:45:43.960448419Z" level=info msg="Starting up" Feb 13 23:45:44.202862 dockerd[1796]: time="2025-02-13T23:45:44.202545045Z" level=info msg="Loading containers: start." Feb 13 23:45:44.367240 kernel: Initializing XFRM netlink socket Feb 13 23:45:44.404469 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Feb 13 23:45:44.475778 systemd-networkd[1439]: docker0: Link UP Feb 13 23:45:44.497336 dockerd[1796]: time="2025-02-13T23:45:44.497179780Z" level=info msg="Loading containers: done." Feb 13 23:45:44.527329 dockerd[1796]: time="2025-02-13T23:45:44.527093793Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 23:45:44.527645 dockerd[1796]: time="2025-02-13T23:45:44.527612261Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 23:45:44.527978 dockerd[1796]: time="2025-02-13T23:45:44.527948734Z" level=info msg="Daemon has completed initialization" Feb 13 23:45:45.324916 systemd-resolved[1389]: Clock change detected. Flushing caches. Feb 13 23:45:45.324921 systemd-timesyncd[1425]: Contacted time server [2a00:1098:0:86:1000:67:0:1]:123 (2.flatcar.pool.ntp.org). Feb 13 23:45:45.325017 systemd-timesyncd[1425]: Initial clock synchronization to Thu 2025-02-13 23:45:45.324579 UTC. Feb 13 23:45:45.371498 dockerd[1796]: time="2025-02-13T23:45:45.370002931Z" level=info msg="API listen on /run/docker.sock" Feb 13 23:45:45.370927 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 23:45:46.951126 containerd[1511]: time="2025-02-13T23:45:46.950896776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 23:45:47.552936 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 23:45:47.852735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207624250.mount: Deactivated successfully. 
Feb 13 23:45:50.969585 containerd[1511]: time="2025-02-13T23:45:50.969509184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:50.975476 containerd[1511]: time="2025-02-13T23:45:50.975428397Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678222" Feb 13 23:45:50.977170 containerd[1511]: time="2025-02-13T23:45:50.977090220Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:50.982451 containerd[1511]: time="2025-02-13T23:45:50.981799794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:50.983583 containerd[1511]: time="2025-02-13T23:45:50.983536020Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 4.032493794s" Feb 13 23:45:50.983684 containerd[1511]: time="2025-02-13T23:45:50.983613776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 23:45:51.018688 containerd[1511]: time="2025-02-13T23:45:51.018625645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 23:45:51.525187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 23:45:51.541618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:45:51.700340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:45:51.712807 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:45:51.823194 kubelet[2015]: E0213 23:45:51.822377 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:45:51.824758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:45:51.825049 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
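
This run.go:74 failure (and its repeats below) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join, and neither has run yet, so systemd keeps restarting the unit until the file appears. You would not write the file by hand on a kubeadm node; purely to illustrate its shape (field values are generic examples, not read from this host):

    # the config kubeadm generates; shown only for format
    cat /var/lib/kubelet/config.yaml
    # apiVersion: kubelet.config.k8s.io/v1beta1
    # kind: KubeletConfiguration
    # cgroupDriver: systemd
    # staticPodPath: /etc/kubernetes/manifests
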
Feb 13 23:45:54.186220 containerd[1511]: time="2025-02-13T23:45:54.184559237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:54.186220 containerd[1511]: time="2025-02-13T23:45:54.186149237Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611553" Feb 13 23:45:54.188067 containerd[1511]: time="2025-02-13T23:45:54.188000750Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:54.193056 containerd[1511]: time="2025-02-13T23:45:54.192979760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:54.194923 containerd[1511]: time="2025-02-13T23:45:54.194736536Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 3.175762288s" Feb 13 23:45:54.194923 containerd[1511]: time="2025-02-13T23:45:54.194791200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 23:45:54.228657 containerd[1511]: time="2025-02-13T23:45:54.228607653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 23:45:56.517753 containerd[1511]: time="2025-02-13T23:45:56.517683848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:56.519204 containerd[1511]: time="2025-02-13T23:45:56.519137015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782138" Feb 13 23:45:56.520275 containerd[1511]: time="2025-02-13T23:45:56.520193030Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:56.525434 containerd[1511]: time="2025-02-13T23:45:56.525379263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:56.527465 containerd[1511]: time="2025-02-13T23:45:56.527424143Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 2.298760611s" Feb 13 23:45:56.527788 containerd[1511]: time="2025-02-13T23:45:56.527586591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 23:45:56.557070 
containerd[1511]: time="2025-02-13T23:45:56.557006034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 23:45:58.448993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989149393.mount: Deactivated successfully. Feb 13 23:45:59.371231 containerd[1511]: time="2025-02-13T23:45:59.371121546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:59.373327 containerd[1511]: time="2025-02-13T23:45:59.373270402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 13 23:45:59.376031 containerd[1511]: time="2025-02-13T23:45:59.375920616Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:59.379065 containerd[1511]: time="2025-02-13T23:45:59.378982686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:45:59.380614 containerd[1511]: time="2025-02-13T23:45:59.380328033Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.823260354s" Feb 13 23:45:59.380614 containerd[1511]: time="2025-02-13T23:45:59.380398842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 23:45:59.417709 containerd[1511]: time="2025-02-13T23:45:59.417656784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 23:46:00.064591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452670462.mount: Deactivated successfully. 
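
The PullImage/ImageCreate pairs in this stretch are containerd acting as the CRI image service (kubeadm pre-pulling the control-plane images), so they land in containerd's CRI image store rather than Docker's. The same store can be inspected, or a pull repeated, with crictl; a small sketch assuming the stock socket path:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-proxy:v1.30.10
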
Feb 13 23:46:01.623263 containerd[1511]: time="2025-02-13T23:46:01.623193805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:01.625331 containerd[1511]: time="2025-02-13T23:46:01.624629106Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 23:46:01.628887 containerd[1511]: time="2025-02-13T23:46:01.628842521Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:01.634137 containerd[1511]: time="2025-02-13T23:46:01.634053841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:01.635921 containerd[1511]: time="2025-02-13T23:46:01.635748097Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.218031114s" Feb 13 23:46:01.635921 containerd[1511]: time="2025-02-13T23:46:01.635798395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 23:46:01.669978 containerd[1511]: time="2025-02-13T23:46:01.669928927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 23:46:01.677664 update_engine[1491]: I20250213 23:46:01.677472 1491 update_attempter.cc:509] Updating boot flags... Feb 13 23:46:01.738154 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2108) Feb 13 23:46:01.830817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2107) Feb 13 23:46:01.836156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 23:46:01.844221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:46:02.030061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:46:02.036292 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:46:02.154927 kubelet[2123]: E0213 23:46:02.154844 2123 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:46:02.157336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:46:02.157606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:46:02.544415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566260511.mount: Deactivated successfully. 
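
"Scheduled restart job, restart counter is at 4" is systemd's Restart= machinery, not kubelet-internal retry logic: the kubeadm drop-in conventionally sets Restart=always with a 10 s delay, which matches the roughly 11 s spacing of the failures in this log. A sketch of how to confirm (the fragment shown is the typical kubeadm drop-in, not read from this host):

    systemctl cat kubelet.service
    # typical kubeadm drop-in fragment:
    #   [Service]
    #   Restart=always
    #   RestartSec=10
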
Feb 13 23:46:02.568309 containerd[1511]: time="2025-02-13T23:46:02.568034194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:02.569500 containerd[1511]: time="2025-02-13T23:46:02.569396312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 13 23:46:02.573576 containerd[1511]: time="2025-02-13T23:46:02.572889743Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:02.578563 containerd[1511]: time="2025-02-13T23:46:02.578484856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:02.580288 containerd[1511]: time="2025-02-13T23:46:02.579679480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 909.692777ms" Feb 13 23:46:02.580288 containerd[1511]: time="2025-02-13T23:46:02.579731000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 23:46:02.614607 containerd[1511]: time="2025-02-13T23:46:02.614398511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 23:46:03.271706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307003196.mount: Deactivated successfully. Feb 13 23:46:06.968683 containerd[1511]: time="2025-02-13T23:46:06.968608547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:06.973012 containerd[1511]: time="2025-02-13T23:46:06.972929989Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Feb 13 23:46:06.976138 containerd[1511]: time="2025-02-13T23:46:06.976064695Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:06.982064 containerd[1511]: time="2025-02-13T23:46:06.981993245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:06.984478 containerd[1511]: time="2025-02-13T23:46:06.983636700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.369129472s" Feb 13 23:46:06.984478 containerd[1511]: time="2025-02-13T23:46:06.983694594Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 23:46:11.189872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
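
Note that pause:3.9 is pulled here because it is on kubeadm's image list, yet the pod sandboxes created further down fetch pause:3.8: containerd uses its own configured sandbox_image (3.8 is the containerd 1.7 default) regardless of what was pre-pulled. A sketch of how to see which sandbox image is in effect:

    # effective CRI plugin configuration, sandbox image included
    containerd config dump | grep sandbox_image
    #   sandbox_image = "registry.k8s.io/pause:3.8"
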
Feb 13 23:46:11.201656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:46:11.234705 systemd[1]: Reloading requested from client PID 2248 ('systemctl') (unit session-11.scope)... Feb 13 23:46:11.234766 systemd[1]: Reloading... Feb 13 23:46:11.388683 zram_generator::config[2287]: No configuration found. Feb 13 23:46:11.594967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:46:11.705810 systemd[1]: Reloading finished in 469 ms. Feb 13 23:46:11.773778 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 23:46:11.773923 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 23:46:11.774470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:46:11.782866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:46:11.960455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:46:11.971762 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 23:46:12.078198 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:46:12.079350 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 23:46:12.079350 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:46:12.079350 kubelet[2353]: I0213 23:46:12.078836 2353 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 23:46:12.627081 kubelet[2353]: I0213 23:46:12.627009 2353 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 23:46:12.627081 kubelet[2353]: I0213 23:46:12.627061 2353 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 23:46:12.627428 kubelet[2353]: I0213 23:46:12.627404 2353 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 23:46:12.650149 kubelet[2353]: I0213 23:46:12.649845 2353 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 23:46:12.656264 kubelet[2353]: E0213 23:46:12.656133 2353 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.61.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.674299 kubelet[2353]: I0213 23:46:12.674241 2353 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 23:46:12.677273 kubelet[2353]: I0213 23:46:12.677181 2353 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 23:46:12.679010 kubelet[2353]: I0213 23:46:12.677274 2353 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gs5j1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 23:46:12.680472 kubelet[2353]: I0213 23:46:12.680440 2353 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 23:46:12.680472 kubelet[2353]: I0213 23:46:12.680474 2353 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 23:46:12.682373 kubelet[2353]: I0213 23:46:12.682319 2353 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:46:12.684414 kubelet[2353]: W0213 23:46:12.684288 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.61.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gs5j1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.684414 kubelet[2353]: E0213 23:46:12.684379 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.61.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gs5j1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.685324 kubelet[2353]: I0213 23:46:12.685286 2353 kubelet.go:400] "Attempting to sync node with API server" Feb 13 23:46:12.685406 kubelet[2353]: I0213 23:46:12.685337 2353 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 23:46:12.685458 kubelet[2353]: I0213 23:46:12.685413 2353 kubelet.go:312] "Adding apiserver pod source" Feb 13 23:46:12.687268 kubelet[2353]: I0213 23:46:12.685506 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 23:46:12.688925 kubelet[2353]: W0213 23:46:12.688866 2353 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.61.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.689027 kubelet[2353]: E0213 23:46:12.688930 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.61.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.689483 kubelet[2353]: I0213 23:46:12.689450 2353 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 23:46:12.692040 kubelet[2353]: I0213 23:46:12.691400 2353 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 23:46:12.692040 kubelet[2353]: W0213 23:46:12.691528 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 23:46:12.692642 kubelet[2353]: I0213 23:46:12.692602 2353 server.go:1264] "Started kubelet" Feb 13 23:46:12.697161 kubelet[2353]: I0213 23:46:12.696546 2353 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 23:46:12.698198 kubelet[2353]: I0213 23:46:12.698100 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 23:46:12.698724 kubelet[2353]: I0213 23:46:12.698693 2353 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 23:46:12.702910 kubelet[2353]: I0213 23:46:12.702875 2353 server.go:455] "Adding debug handlers to kubelet server" Feb 13 23:46:12.704924 kubelet[2353]: I0213 23:46:12.704896 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 23:46:12.706927 kubelet[2353]: E0213 23:46:12.706762 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.61.58:6443/api/v1/namespaces/default/events\": dial tcp 10.230.61.58:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gs5j1.gb1.brightbox.com.1823e9471a8d5467 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gs5j1.gb1.brightbox.com,UID:srv-gs5j1.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gs5j1.gb1.brightbox.com,},FirstTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,LastTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gs5j1.gb1.brightbox.com,}" Feb 13 23:46:12.712059 kubelet[2353]: I0213 23:46:12.711980 2353 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 23:46:12.715369 kubelet[2353]: I0213 23:46:12.715338 2353 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 23:46:12.715717 kubelet[2353]: I0213 23:46:12.715694 2353 reconciler.go:26] "Reconciler: start to sync state" Feb 13 23:46:12.717037 kubelet[2353]: W0213 23:46:12.716310 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.61.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 
13 23:46:12.717037 kubelet[2353]: E0213 23:46:12.716378 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.61.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.724037 kubelet[2353]: E0213 23:46:12.723988 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gs5j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.58:6443: connect: connection refused" interval="200ms" Feb 13 23:46:12.725751 kubelet[2353]: I0213 23:46:12.725055 2353 factory.go:221] Registration of the systemd container factory successfully Feb 13 23:46:12.725751 kubelet[2353]: I0213 23:46:12.725178 2353 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 23:46:12.725751 kubelet[2353]: E0213 23:46:12.725407 2353 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 23:46:12.728179 kubelet[2353]: I0213 23:46:12.728145 2353 factory.go:221] Registration of the containerd container factory successfully Feb 13 23:46:12.740755 kubelet[2353]: I0213 23:46:12.740700 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 23:46:12.742458 kubelet[2353]: I0213 23:46:12.742429 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 23:46:12.742646 kubelet[2353]: I0213 23:46:12.742622 2353 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 23:46:12.742779 kubelet[2353]: I0213 23:46:12.742758 2353 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 23:46:12.742983 kubelet[2353]: E0213 23:46:12.742946 2353 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 23:46:12.753314 kubelet[2353]: W0213 23:46:12.753201 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.61.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.753435 kubelet[2353]: E0213 23:46:12.753354 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.61.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:12.769606 kubelet[2353]: I0213 23:46:12.769564 2353 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 23:46:12.769606 kubelet[2353]: I0213 23:46:12.769591 2353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 23:46:12.769779 kubelet[2353]: I0213 23:46:12.769624 2353 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:46:12.775895 kubelet[2353]: I0213 23:46:12.775836 2353 policy_none.go:49] "None policy: Start" Feb 13 23:46:12.776879 kubelet[2353]: I0213 23:46:12.776840 2353 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 23:46:12.776879 kubelet[2353]: I0213 23:46:12.776879 2353 state_mem.go:35] "Initializing new in-memory 
state store" Feb 13 23:46:12.788308 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 23:46:12.806293 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 23:46:12.811530 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 23:46:12.819230 kubelet[2353]: I0213 23:46:12.818971 2353 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 23:46:12.819366 kubelet[2353]: I0213 23:46:12.819306 2353 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 23:46:12.820026 kubelet[2353]: I0213 23:46:12.819500 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 23:46:12.825984 kubelet[2353]: E0213 23:46:12.825938 2353 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gs5j1.gb1.brightbox.com\" not found" Feb 13 23:46:12.827569 kubelet[2353]: I0213 23:46:12.827512 2353 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.828088 kubelet[2353]: E0213 23:46:12.828045 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.61.58:6443/api/v1/nodes\": dial tcp 10.230.61.58:6443: connect: connection refused" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.843260 kubelet[2353]: I0213 23:46:12.843187 2353 topology_manager.go:215] "Topology Admit Handler" podUID="9d08f05d6d8a8cfc8cbc85f9e48e0324" podNamespace="kube-system" podName="kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.845915 kubelet[2353]: I0213 23:46:12.845880 2353 topology_manager.go:215] "Topology Admit Handler" podUID="8713938662a8800b9c1fc8388e0c379b" podNamespace="kube-system" podName="kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.848233 kubelet[2353]: I0213 23:46:12.848198 2353 topology_manager.go:215] "Topology Admit Handler" podUID="ea1e0c3f2c1ef08d13c5000be28554a4" podNamespace="kube-system" podName="kube-scheduler-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.859037 systemd[1]: Created slice kubepods-burstable-pod9d08f05d6d8a8cfc8cbc85f9e48e0324.slice - libcontainer container kubepods-burstable-pod9d08f05d6d8a8cfc8cbc85f9e48e0324.slice. Feb 13 23:46:12.885380 systemd[1]: Created slice kubepods-burstable-pod8713938662a8800b9c1fc8388e0c379b.slice - libcontainer container kubepods-burstable-pod8713938662a8800b9c1fc8388e0c379b.slice. Feb 13 23:46:12.892578 systemd[1]: Created slice kubepods-burstable-podea1e0c3f2c1ef08d13c5000be28554a4.slice - libcontainer container kubepods-burstable-podea1e0c3f2c1ef08d13c5000be28554a4.slice. 
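
The three "Topology Admit Handler" entries and the kubepods-burstable-pod*.slice units are the control-plane static pods: the kubelet reads manifests from /etc/kubernetes/manifests (the static pod path it logged while adding pod sources) and runs them with no API server involved. The real manifests are kubeadm-generated; a minimal static pod of the same kind, names and image purely illustrative:

    # any manifest dropped into this directory is started by the kubelet automatically
    cat <<'EOF' | sudo tee /etc/kubernetes/manifests/example.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: example          # hypothetical pod, not part of this boot
      namespace: kube-system
    spec:
      containers:
      - name: main
        image: registry.k8s.io/pause:3.9
    EOF
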
Feb 13 23:46:12.916416 kubelet[2353]: I0213 23:46:12.916206 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916416 kubelet[2353]: I0213 23:46:12.916299 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-ca-certs\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916416 kubelet[2353]: I0213 23:46:12.916339 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-kubeconfig\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916416 kubelet[2353]: I0213 23:46:12.916372 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916416 kubelet[2353]: I0213 23:46:12.916401 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-k8s-certs\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916782 kubelet[2353]: I0213 23:46:12.916431 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-flexvolume-dir\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916782 kubelet[2353]: I0213 23:46:12.916459 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-k8s-certs\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916782 kubelet[2353]: I0213 23:46:12.916539 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea1e0c3f2c1ef08d13c5000be28554a4-kubeconfig\") pod \"kube-scheduler-srv-gs5j1.gb1.brightbox.com\" (UID: \"ea1e0c3f2c1ef08d13c5000be28554a4\") " pod="kube-system/kube-scheduler-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.916782 kubelet[2353]: I0213 
23:46:12.916574 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-ca-certs\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:12.925423 kubelet[2353]: E0213 23:46:12.925368 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gs5j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.58:6443: connect: connection refused" interval="400ms" Feb 13 23:46:13.031011 kubelet[2353]: I0213 23:46:13.030955 2353 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:13.031537 kubelet[2353]: E0213 23:46:13.031503 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.61.58:6443/api/v1/nodes\": dial tcp 10.230.61.58:6443: connect: connection refused" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:13.181504 containerd[1511]: time="2025-02-13T23:46:13.181364391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gs5j1.gb1.brightbox.com,Uid:9d08f05d6d8a8cfc8cbc85f9e48e0324,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:13.190091 containerd[1511]: time="2025-02-13T23:46:13.189921896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gs5j1.gb1.brightbox.com,Uid:8713938662a8800b9c1fc8388e0c379b,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:13.196924 containerd[1511]: time="2025-02-13T23:46:13.196885328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gs5j1.gb1.brightbox.com,Uid:ea1e0c3f2c1ef08d13c5000be28554a4,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:13.326378 kubelet[2353]: E0213 23:46:13.326244 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gs5j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.58:6443: connect: connection refused" interval="800ms" Feb 13 23:46:13.436952 kubelet[2353]: I0213 23:46:13.436326 2353 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:13.436952 kubelet[2353]: E0213 23:46:13.436748 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.61.58:6443/api/v1/nodes\": dial tcp 10.230.61.58:6443: connect: connection refused" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:13.792185 kubelet[2353]: W0213 23:46:13.792015 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.61.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:13.792185 kubelet[2353]: E0213 23:46:13.792114 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.61.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:13.826474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274204324.mount: Deactivated successfully. 
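
The "Failed to ensure lease exists, will retry" errors back off as the log proceeds (200 ms, 400 ms, then 1.6 s below): the kubelet cannot create its heartbeat Lease in kube-node-lease until the kube-apiserver it is itself bootstrapping answers on 10.230.61.58:6443. Once the control plane is up, the lease is an ordinary object:

    # heartbeat lease, renewed by the kubelet about every 10 s after registration
    kubectl get lease -n kube-node-lease srv-gs5j1.gb1.brightbox.com -o yaml
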
Feb 13 23:46:13.837194 containerd[1511]: time="2025-02-13T23:46:13.837115578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:46:13.839173 containerd[1511]: time="2025-02-13T23:46:13.839095270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 23:46:13.839933 containerd[1511]: time="2025-02-13T23:46:13.839887502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:46:13.842277 containerd[1511]: time="2025-02-13T23:46:13.842176968Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:46:13.844283 containerd[1511]: time="2025-02-13T23:46:13.843982182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:46:13.845520 containerd[1511]: time="2025-02-13T23:46:13.845455810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 23:46:13.846726 containerd[1511]: time="2025-02-13T23:46:13.846660949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 23:46:13.848546 containerd[1511]: time="2025-02-13T23:46:13.848462558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:46:13.853372 containerd[1511]: time="2025-02-13T23:46:13.852129660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.827995ms" Feb 13 23:46:13.855639 containerd[1511]: time="2025-02-13T23:46:13.855478152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.435359ms" Feb 13 23:46:13.857289 containerd[1511]: time="2025-02-13T23:46:13.857062637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.055728ms" Feb 13 23:46:13.864666 kubelet[2353]: W0213 23:46:13.864573 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.61.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:13.864666 kubelet[2353]: E0213 
23:46:13.864665 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.61.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:13.999647 kubelet[2353]: E0213 23:46:13.999412 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.61.58:6443/api/v1/namespaces/default/events\": dial tcp 10.230.61.58:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gs5j1.gb1.brightbox.com.1823e9471a8d5467 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gs5j1.gb1.brightbox.com,UID:srv-gs5j1.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gs5j1.gb1.brightbox.com,},FirstTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,LastTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gs5j1.gb1.brightbox.com,}" Feb 13 23:46:14.144779 containerd[1511]: time="2025-02-13T23:46:14.141918179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:14.144779 containerd[1511]: time="2025-02-13T23:46:14.142046113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:14.144779 containerd[1511]: time="2025-02-13T23:46:14.142073580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.144779 containerd[1511]: time="2025-02-13T23:46:14.142203187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.145169 kubelet[2353]: E0213 23:46:14.142820 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gs5j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.58:6443: connect: connection refused" interval="1.6s" Feb 13 23:46:14.169278 containerd[1511]: time="2025-02-13T23:46:14.165003473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:14.169278 containerd[1511]: time="2025-02-13T23:46:14.165086033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:14.169278 containerd[1511]: time="2025-02-13T23:46:14.165127226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.169278 containerd[1511]: time="2025-02-13T23:46:14.165271228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.174655 containerd[1511]: time="2025-02-13T23:46:14.174239756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:14.174655 containerd[1511]: time="2025-02-13T23:46:14.174351824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:14.174655 containerd[1511]: time="2025-02-13T23:46:14.174378149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.174655 containerd[1511]: time="2025-02-13T23:46:14.174497890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:14.208623 systemd[1]: Started cri-containerd-bd4dc120cb0d0aca19cec3bff071a6e74fbaa6aefb871e6fab37ab79b14a8b1a.scope - libcontainer container bd4dc120cb0d0aca19cec3bff071a6e74fbaa6aefb871e6fab37ab79b14a8b1a. Feb 13 23:46:14.218501 systemd[1]: Started cri-containerd-c7fa2dc8dcb608cfdd2ba86f537874427c016c6b293834cb0b60be930d0f124a.scope - libcontainer container c7fa2dc8dcb608cfdd2ba86f537874427c016c6b293834cb0b60be930d0f124a. Feb 13 23:46:14.229501 systemd[1]: Started cri-containerd-05f2f6d44c2e103ec81e05358d21fbd25054b074ff4c0d35a96bc4d1439ad5ef.scope - libcontainer container 05f2f6d44c2e103ec81e05358d21fbd25054b074ff4c0d35a96bc4d1439ad5ef. Feb 13 23:46:14.243140 kubelet[2353]: I0213 23:46:14.243078 2353 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:14.244671 kubelet[2353]: E0213 23:46:14.244594 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.61.58:6443/api/v1/nodes\": dial tcp 10.230.61.58:6443: connect: connection refused" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:14.254115 kubelet[2353]: W0213 23:46:14.254009 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.61.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gs5j1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:14.254115 kubelet[2353]: E0213 23:46:14.254106 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.61.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gs5j1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:14.304282 kubelet[2353]: W0213 23:46:14.303336 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.61.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:14.304282 kubelet[2353]: E0213 23:46:14.303400 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.61.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:14.350944 containerd[1511]: time="2025-02-13T23:46:14.349683768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gs5j1.gb1.brightbox.com,Uid:ea1e0c3f2c1ef08d13c5000be28554a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fa2dc8dcb608cfdd2ba86f537874427c016c6b293834cb0b60be930d0f124a\"" Feb 13 23:46:14.360285 containerd[1511]: time="2025-02-13T23:46:14.360214469Z" 
level=info msg="CreateContainer within sandbox \"c7fa2dc8dcb608cfdd2ba86f537874427c016c6b293834cb0b60be930d0f124a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 23:46:14.369191 containerd[1511]: time="2025-02-13T23:46:14.369000210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gs5j1.gb1.brightbox.com,Uid:8713938662a8800b9c1fc8388e0c379b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd4dc120cb0d0aca19cec3bff071a6e74fbaa6aefb871e6fab37ab79b14a8b1a\"" Feb 13 23:46:14.370271 containerd[1511]: time="2025-02-13T23:46:14.370219555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gs5j1.gb1.brightbox.com,Uid:9d08f05d6d8a8cfc8cbc85f9e48e0324,Namespace:kube-system,Attempt:0,} returns sandbox id \"05f2f6d44c2e103ec81e05358d21fbd25054b074ff4c0d35a96bc4d1439ad5ef\"" Feb 13 23:46:14.380429 containerd[1511]: time="2025-02-13T23:46:14.380373088Z" level=info msg="CreateContainer within sandbox \"bd4dc120cb0d0aca19cec3bff071a6e74fbaa6aefb871e6fab37ab79b14a8b1a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 23:46:14.382158 containerd[1511]: time="2025-02-13T23:46:14.382027283Z" level=info msg="CreateContainer within sandbox \"05f2f6d44c2e103ec81e05358d21fbd25054b074ff4c0d35a96bc4d1439ad5ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 23:46:14.402502 containerd[1511]: time="2025-02-13T23:46:14.402308536Z" level=info msg="CreateContainer within sandbox \"c7fa2dc8dcb608cfdd2ba86f537874427c016c6b293834cb0b60be930d0f124a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3736bd5281c46e81b79c43e4b99cfa9da1a38fed3980c49672c2786605a4dfaf\"" Feb 13 23:46:14.405000 containerd[1511]: time="2025-02-13T23:46:14.404940299Z" level=info msg="StartContainer for \"3736bd5281c46e81b79c43e4b99cfa9da1a38fed3980c49672c2786605a4dfaf\"" Feb 13 23:46:14.422775 containerd[1511]: time="2025-02-13T23:46:14.422467461Z" level=info msg="CreateContainer within sandbox \"05f2f6d44c2e103ec81e05358d21fbd25054b074ff4c0d35a96bc4d1439ad5ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d3c267461a8955b5dc1e79b012ae396f9eecf662dbc02f8d5c25250ff358f94\"" Feb 13 23:46:14.426104 containerd[1511]: time="2025-02-13T23:46:14.426043304Z" level=info msg="StartContainer for \"1d3c267461a8955b5dc1e79b012ae396f9eecf662dbc02f8d5c25250ff358f94\"" Feb 13 23:46:14.431280 containerd[1511]: time="2025-02-13T23:46:14.429513857Z" level=info msg="CreateContainer within sandbox \"bd4dc120cb0d0aca19cec3bff071a6e74fbaa6aefb871e6fab37ab79b14a8b1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c471a85e4b903d7b983f34e78c25e8aee82885d697f855f9854fea4bb028527\"" Feb 13 23:46:14.434522 containerd[1511]: time="2025-02-13T23:46:14.434476332Z" level=info msg="StartContainer for \"1c471a85e4b903d7b983f34e78c25e8aee82885d697f855f9854fea4bb028527\"" Feb 13 23:46:14.460488 systemd[1]: Started cri-containerd-3736bd5281c46e81b79c43e4b99cfa9da1a38fed3980c49672c2786605a4dfaf.scope - libcontainer container 3736bd5281c46e81b79c43e4b99cfa9da1a38fed3980c49672c2786605a4dfaf. Feb 13 23:46:14.487087 systemd[1]: Started cri-containerd-1d3c267461a8955b5dc1e79b012ae396f9eecf662dbc02f8d5c25250ff358f94.scope - libcontainer container 1d3c267461a8955b5dc1e79b012ae396f9eecf662dbc02f8d5c25250ff358f94. 
Feb 13 23:46:14.506993 systemd[1]: Started cri-containerd-1c471a85e4b903d7b983f34e78c25e8aee82885d697f855f9854fea4bb028527.scope - libcontainer container 1c471a85e4b903d7b983f34e78c25e8aee82885d697f855f9854fea4bb028527. Feb 13 23:46:14.568738 containerd[1511]: time="2025-02-13T23:46:14.568667717Z" level=info msg="StartContainer for \"1d3c267461a8955b5dc1e79b012ae396f9eecf662dbc02f8d5c25250ff358f94\" returns successfully" Feb 13 23:46:14.605014 containerd[1511]: time="2025-02-13T23:46:14.604715407Z" level=info msg="StartContainer for \"3736bd5281c46e81b79c43e4b99cfa9da1a38fed3980c49672c2786605a4dfaf\" returns successfully" Feb 13 23:46:14.629282 containerd[1511]: time="2025-02-13T23:46:14.628911878Z" level=info msg="StartContainer for \"1c471a85e4b903d7b983f34e78c25e8aee82885d697f855f9854fea4bb028527\" returns successfully" Feb 13 23:46:14.795953 kubelet[2353]: E0213 23:46:14.795906 2353 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.61.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.61.58:6443: connect: connection refused Feb 13 23:46:15.847670 kubelet[2353]: I0213 23:46:15.847628 2353 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:17.170343 kubelet[2353]: E0213 23:46:17.170276 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gs5j1.gb1.brightbox.com\" not found" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:17.258107 kubelet[2353]: I0213 23:46:17.258055 2353 kubelet_node_status.go:76] "Successfully registered node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:17.692929 kubelet[2353]: I0213 23:46:17.692377 2353 apiserver.go:52] "Watching apiserver" Feb 13 23:46:17.715952 kubelet[2353]: I0213 23:46:17.715908 2353 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 23:46:18.713851 kubelet[2353]: W0213 23:46:18.713793 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:19.508286 kubelet[2353]: W0213 23:46:19.507017 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:19.519365 systemd[1]: Reloading requested from client PID 2633 ('systemctl') (unit session-11.scope)... Feb 13 23:46:19.519393 systemd[1]: Reloading... Feb 13 23:46:19.662413 zram_generator::config[2681]: No configuration found. Feb 13 23:46:19.834186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:46:19.964581 systemd[1]: Reloading finished in 444 ms. 
Feb 13 23:46:20.032956 kubelet[2353]: I0213 23:46:20.032914 2353 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 23:46:20.035349 kubelet[2353]: E0213 23:46:20.032844 2353 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-gs5j1.gb1.brightbox.com.1823e9471a8d5467 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gs5j1.gb1.brightbox.com,UID:srv-gs5j1.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gs5j1.gb1.brightbox.com,},FirstTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,LastTimestamp:2025-02-13 23:46:12.692563047 +0000 UTC m=+0.714353995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gs5j1.gb1.brightbox.com,}" Feb 13 23:46:20.033195 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:46:20.044752 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 23:46:20.045265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:46:20.045377 systemd[1]: kubelet.service: Consumed 1.191s CPU time, 112.6M memory peak, 0B memory swap peak. Feb 13 23:46:20.058679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:46:20.312374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:46:20.327793 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 23:46:20.415707 kubelet[2736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:46:20.416295 kubelet[2736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 23:46:20.416295 kubelet[2736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:46:20.416295 kubelet[2736]: I0213 23:46:20.415920 2736 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 23:46:20.425166 kubelet[2736]: I0213 23:46:20.425111 2736 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 23:46:20.425166 kubelet[2736]: I0213 23:46:20.425150 2736 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 23:46:20.425712 kubelet[2736]: I0213 23:46:20.425678 2736 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 23:46:20.427919 kubelet[2736]: I0213 23:46:20.427885 2736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
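
Compare this second kubelet start (PID 2736) with the first: "Client rotation is on" now proceeds to load /var/lib/kubelet/pki/kubelet-client-current.pem instead of failing to post a CSR, meaning the TLS bootstrap against 10.230.61.58:6443 completed in between. Assuming the certificate block precedes the key in that pem, as the kubelet writes it, the rotated client cert can be inspected directly:

    sudo openssl x509 -noout -subject -dates \
        -in /var/lib/kubelet/pki/kubelet-client-current.pem
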
Feb 13 23:46:20.431150 kubelet[2736]: I0213 23:46:20.430487 2736 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 23:46:20.449796 kubelet[2736]: I0213 23:46:20.449761 2736 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 23:46:20.450576 kubelet[2736]: I0213 23:46:20.450511 2736 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 23:46:20.451214 kubelet[2736]: I0213 23:46:20.450707 2736 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gs5j1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 23:46:20.451749 kubelet[2736]: I0213 23:46:20.451513 2736 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 23:46:20.451749 kubelet[2736]: I0213 23:46:20.451540 2736 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 23:46:20.451749 kubelet[2736]: I0213 23:46:20.451625 2736 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:46:20.452438 kubelet[2736]: I0213 23:46:20.451983 2736 kubelet.go:400] "Attempting to sync node with API server" Feb 13 23:46:20.452438 kubelet[2736]: I0213 23:46:20.452016 2736 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 23:46:20.452438 kubelet[2736]: I0213 23:46:20.452075 2736 kubelet.go:312] "Adding apiserver pod source" Feb 13 23:46:20.452438 kubelet[2736]: I0213 23:46:20.452325 2736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 23:46:20.458882 kubelet[2736]: I0213 23:46:20.458843 2736 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 23:46:20.463939 kubelet[2736]: I0213 23:46:20.463908 2736 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 23:46:20.464726 kubelet[2736]: I0213 23:46:20.464703 2736 server.go:1264] "Started kubelet" Feb 13 23:46:20.478274 
kubelet[2736]: I0213 23:46:20.478092 2736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 23:46:20.485298 kubelet[2736]: I0213 23:46:20.484840 2736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 23:46:20.486004 kubelet[2736]: I0213 23:46:20.485897 2736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 23:46:20.486612 kubelet[2736]: I0213 23:46:20.486589 2736 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 23:46:20.493015 kubelet[2736]: I0213 23:46:20.492984 2736 server.go:455] "Adding debug handlers to kubelet server" Feb 13 23:46:20.505929 kubelet[2736]: I0213 23:46:20.505885 2736 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 23:46:20.507715 kubelet[2736]: I0213 23:46:20.507017 2736 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 23:46:20.508124 kubelet[2736]: I0213 23:46:20.507929 2736 reconciler.go:26] "Reconciler: start to sync state" Feb 13 23:46:20.511677 kubelet[2736]: I0213 23:46:20.511541 2736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 23:46:20.520318 kubelet[2736]: E0213 23:46:20.519200 2736 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 23:46:20.522369 kubelet[2736]: I0213 23:46:20.521894 2736 factory.go:221] Registration of the containerd container factory successfully Feb 13 23:46:20.522522 kubelet[2736]: I0213 23:46:20.522502 2736 factory.go:221] Registration of the systemd container factory successfully Feb 13 23:46:20.538548 kubelet[2736]: I0213 23:46:20.538482 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 23:46:20.540395 kubelet[2736]: I0213 23:46:20.540355 2736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 23:46:20.540506 kubelet[2736]: I0213 23:46:20.540406 2736 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 23:46:20.540506 kubelet[2736]: I0213 23:46:20.540460 2736 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 23:46:20.540624 kubelet[2736]: E0213 23:46:20.540527 2736 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 23:46:20.618337 kubelet[2736]: I0213 23:46:20.616187 2736 kubelet_node_status.go:73] "Attempting to register node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.633906 kubelet[2736]: I0213 23:46:20.633808 2736 kubelet_node_status.go:112] "Node was previously registered" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.634094 kubelet[2736]: I0213 23:46:20.633937 2736 kubelet_node_status.go:76] "Successfully registered node" node="srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.641618 kubelet[2736]: E0213 23:46:20.640737 2736 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.640979 2736 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.640997 2736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.641036 2736 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.641482 2736 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.641506 2736 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 23:46:20.641618 kubelet[2736]: I0213 23:46:20.641554 2736 policy_none.go:49] "None policy: Start" Feb 13 23:46:20.645289 kubelet[2736]: I0213 23:46:20.644558 2736 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 23:46:20.645289 kubelet[2736]: I0213 23:46:20.644620 2736 state_mem.go:35] "Initializing new in-memory state store" Feb 13 23:46:20.645289 kubelet[2736]: I0213 23:46:20.644870 2736 state_mem.go:75] "Updated machine memory state" Feb 13 23:46:20.662986 kubelet[2736]: I0213 23:46:20.662399 2736 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 23:46:20.666230 kubelet[2736]: I0213 23:46:20.665505 2736 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 23:46:20.670671 kubelet[2736]: I0213 23:46:20.670224 2736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 23:46:20.841701 kubelet[2736]: I0213 23:46:20.841602 2736 topology_manager.go:215] "Topology Admit Handler" podUID="ea1e0c3f2c1ef08d13c5000be28554a4" podNamespace="kube-system" podName="kube-scheduler-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.843506 kubelet[2736]: I0213 23:46:20.842192 2736 topology_manager.go:215] "Topology Admit Handler" podUID="9d08f05d6d8a8cfc8cbc85f9e48e0324" podNamespace="kube-system" podName="kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.843506 kubelet[2736]: I0213 23:46:20.843008 2736 topology_manager.go:215] "Topology Admit Handler" podUID="8713938662a8800b9c1fc8388e0c379b" podNamespace="kube-system" podName="kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.851235 kubelet[2736]: W0213 23:46:20.850751 2736 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:20.853815 kubelet[2736]: W0213 23:46:20.853725 2736 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:20.853941 kubelet[2736]: E0213 23:46:20.853813 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.854585 kubelet[2736]: W0213 23:46:20.854140 2736 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:20.854585 kubelet[2736]: E0213 23:46:20.854239 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.912791 kubelet[2736]: I0213 23:46:20.911082 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-ca-certs\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.912791 kubelet[2736]: I0213 23:46:20.911143 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-k8s-certs\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.912791 kubelet[2736]: I0213 23:46:20.911176 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-k8s-certs\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.912791 kubelet[2736]: I0213 23:46:20.911211 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-flexvolume-dir\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.912791 kubelet[2736]: I0213 23:46:20.911258 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-kubeconfig\") pod \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.913174 kubelet[2736]: I0213 23:46:20.911292 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8713938662a8800b9c1fc8388e0c379b-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" (UID: \"8713938662a8800b9c1fc8388e0c379b\") " pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.913174 kubelet[2736]: I0213 23:46:20.911354 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea1e0c3f2c1ef08d13c5000be28554a4-kubeconfig\") pod \"kube-scheduler-srv-gs5j1.gb1.brightbox.com\" (UID: \"ea1e0c3f2c1ef08d13c5000be28554a4\") " pod="kube-system/kube-scheduler-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.913174 kubelet[2736]: I0213 23:46:20.911401 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-ca-certs\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:20.913174 kubelet[2736]: I0213 23:46:20.911501 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d08f05d6d8a8cfc8cbc85f9e48e0324-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" (UID: \"9d08f05d6d8a8cfc8cbc85f9e48e0324\") " pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:21.456321 kubelet[2736]: I0213 23:46:21.456041 2736 apiserver.go:52] "Watching apiserver" Feb 13 23:46:21.508031 kubelet[2736]: I0213 23:46:21.507952 2736 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 23:46:21.600287 kubelet[2736]: W0213 23:46:21.599163 2736 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:21.600287 kubelet[2736]: W0213 23:46:21.599229 2736 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:21.600287 kubelet[2736]: W0213 23:46:21.599283 2736 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 23:46:21.600287 kubelet[2736]: E0213 23:46:21.599351 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-gs5j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:21.600287 kubelet[2736]: E0213 23:46:21.599891 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-gs5j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:21.600287 kubelet[2736]: E0213 23:46:21.600187 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-gs5j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" Feb 13 23:46:21.739141 kubelet[2736]: I0213 23:46:21.738872 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gs5j1.gb1.brightbox.com" podStartSLOduration=2.738830288 podStartE2EDuration="2.738830288s" podCreationTimestamp="2025-02-13 23:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:46:21.677560751 +0000 UTC m=+1.343176117" watchObservedRunningTime="2025-02-13 23:46:21.738830288 +0000 UTC m=+1.404445642" Feb 13 23:46:21.773643 kubelet[2736]: I0213 23:46:21.773565 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gs5j1.gb1.brightbox.com" podStartSLOduration=3.773539873 podStartE2EDuration="3.773539873s" podCreationTimestamp="2025-02-13 23:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:46:21.739844285 +0000 UTC m=+1.405459623" watchObservedRunningTime="2025-02-13 23:46:21.773539873 +0000 UTC m=+1.439155241" Feb 13 23:46:21.813663 kubelet[2736]: I0213 23:46:21.811942 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gs5j1.gb1.brightbox.com" podStartSLOduration=1.811917175 podStartE2EDuration="1.811917175s" podCreationTimestamp="2025-02-13 23:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:46:21.778369744 +0000 UTC m=+1.443985104" watchObservedRunningTime="2025-02-13 23:46:21.811917175 +0000 UTC m=+1.477532540" Feb 13 23:46:26.714328 sudo[1780]: pam_unix(sudo:session): session closed for user root Feb 13 23:46:26.859926 sshd[1777]: pam_unix(sshd:session): session closed for user core Feb 13 23:46:26.864695 systemd[1]: sshd@8-10.230.61.58:22-147.75.109.163:58554.service: Deactivated successfully. Feb 13 23:46:26.867192 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 23:46:26.867519 systemd[1]: session-11.scope: Consumed 6.985s CPU time, 189.3M memory peak, 0B memory swap peak. Feb 13 23:46:26.869327 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Feb 13 23:46:26.871724 systemd-logind[1490]: Removed session 11. Feb 13 23:46:35.317826 kubelet[2736]: I0213 23:46:35.317779 2736 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 23:46:35.319482 containerd[1511]: time="2025-02-13T23:46:35.319401227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 23:46:35.320074 kubelet[2736]: I0213 23:46:35.319742 2736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 23:46:36.051976 kubelet[2736]: I0213 23:46:36.051112 2736 topology_manager.go:215] "Topology Admit Handler" podUID="3a0239e8-9d31-49dc-b96d-6dcab5b09687" podNamespace="kube-system" podName="kube-proxy-c8w4w" Feb 13 23:46:36.070221 systemd[1]: Created slice kubepods-besteffort-pod3a0239e8_9d31_49dc_b96d_6dcab5b09687.slice - libcontainer container kubepods-besteffort-pod3a0239e8_9d31_49dc_b96d_6dcab5b09687.slice. 
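
The `pod_startup_latency_tracker` entries above report a `podStartSLOduration` that lines up with `watchObservedRunningTime` minus `podCreationTimestamp` (the pull timestamps are zero because these static-pod images were already present). A quick check with the kube-scheduler timestamps copied from the log:

```python
from datetime import datetime, timezone

# podCreationTimestamp and watchObservedRunningTime from the entry above;
# nanoseconds are truncated to microseconds for datetime.
created  = datetime(2025, 2, 13, 23, 46, 20, tzinfo=timezone.utc)
observed = datetime(2025, 2, 13, 23, 46, 21, 811917, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 1.811917 ~= podStartSLOduration
```
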
Feb 13 23:46:36.112330 kubelet[2736]: I0213 23:46:36.111417 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfq4h\" (UniqueName: \"kubernetes.io/projected/3a0239e8-9d31-49dc-b96d-6dcab5b09687-kube-api-access-wfq4h\") pod \"kube-proxy-c8w4w\" (UID: \"3a0239e8-9d31-49dc-b96d-6dcab5b09687\") " pod="kube-system/kube-proxy-c8w4w" Feb 13 23:46:36.112330 kubelet[2736]: I0213 23:46:36.111479 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a0239e8-9d31-49dc-b96d-6dcab5b09687-kube-proxy\") pod \"kube-proxy-c8w4w\" (UID: \"3a0239e8-9d31-49dc-b96d-6dcab5b09687\") " pod="kube-system/kube-proxy-c8w4w" Feb 13 23:46:36.112330 kubelet[2736]: I0213 23:46:36.111523 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a0239e8-9d31-49dc-b96d-6dcab5b09687-xtables-lock\") pod \"kube-proxy-c8w4w\" (UID: \"3a0239e8-9d31-49dc-b96d-6dcab5b09687\") " pod="kube-system/kube-proxy-c8w4w" Feb 13 23:46:36.112330 kubelet[2736]: I0213 23:46:36.111553 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a0239e8-9d31-49dc-b96d-6dcab5b09687-lib-modules\") pod \"kube-proxy-c8w4w\" (UID: \"3a0239e8-9d31-49dc-b96d-6dcab5b09687\") " pod="kube-system/kube-proxy-c8w4w" Feb 13 23:46:36.223378 kubelet[2736]: E0213 23:46:36.223199 2736 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 23:46:36.223378 kubelet[2736]: E0213 23:46:36.223289 2736 projected.go:200] Error preparing data for projected volume kube-api-access-wfq4h for pod kube-system/kube-proxy-c8w4w: configmap "kube-root-ca.crt" not found Feb 13 23:46:36.223622 kubelet[2736]: E0213 23:46:36.223463 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a0239e8-9d31-49dc-b96d-6dcab5b09687-kube-api-access-wfq4h podName:3a0239e8-9d31-49dc-b96d-6dcab5b09687 nodeName:}" failed. No retries permitted until 2025-02-13 23:46:36.723375525 +0000 UTC m=+16.388990875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wfq4h" (UniqueName: "kubernetes.io/projected/3a0239e8-9d31-49dc-b96d-6dcab5b09687-kube-api-access-wfq4h") pod "kube-proxy-c8w4w" (UID: "3a0239e8-9d31-49dc-b96d-6dcab5b09687") : configmap "kube-root-ca.crt" not found Feb 13 23:46:36.436610 kubelet[2736]: I0213 23:46:36.434596 2736 topology_manager.go:215] "Topology Admit Handler" podUID="5071061e-bca4-4e23-9b14-d2bc3d9b6853" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-vgcgb" Feb 13 23:46:36.447341 systemd[1]: Created slice kubepods-besteffort-pod5071061e_bca4_4e23_9b14_d2bc3d9b6853.slice - libcontainer container kubepods-besteffort-pod5071061e_bca4_4e23_9b14_d2bc3d9b6853.slice. 
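
The `MountVolume.SetUp` failure above is benign ordering noise: the projected token volume needs the `kube-root-ca.crt` ConfigMap, which the controller manager has not yet published into the namespace, so the kubelet schedules a retry 500ms later ("durationBeforeRetry 500ms"). The kubelet backs off roughly exponentially on repeated volume-operation failures; a sketch of that schedule, where the doubling factor matches the behavior implied by the log but the exact cap is an assumption:

```python
import itertools

def backoff_schedule(initial=0.5, factor=2.0, cap=122.0):
    """Yield retry delays: 0.5s, 1s, 2s, ... up to a cap.

    The 0.5s starting point is taken from 'durationBeforeRetry 500ms'
    above; the ~2 minute cap is an assumption, not read from the log.
    """
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, cap)

print(list(itertools.islice(backoff_schedule(), 6)))
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```
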
Feb 13 23:46:36.514497 kubelet[2736]: I0213 23:46:36.514388 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hnzg\" (UniqueName: \"kubernetes.io/projected/5071061e-bca4-4e23-9b14-d2bc3d9b6853-kube-api-access-5hnzg\") pod \"tigera-operator-7bc55997bb-vgcgb\" (UID: \"5071061e-bca4-4e23-9b14-d2bc3d9b6853\") " pod="tigera-operator/tigera-operator-7bc55997bb-vgcgb" Feb 13 23:46:36.514497 kubelet[2736]: I0213 23:46:36.514493 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5071061e-bca4-4e23-9b14-d2bc3d9b6853-var-lib-calico\") pod \"tigera-operator-7bc55997bb-vgcgb\" (UID: \"5071061e-bca4-4e23-9b14-d2bc3d9b6853\") " pod="tigera-operator/tigera-operator-7bc55997bb-vgcgb" Feb 13 23:46:36.756792 containerd[1511]: time="2025-02-13T23:46:36.756628285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vgcgb,Uid:5071061e-bca4-4e23-9b14-d2bc3d9b6853,Namespace:tigera-operator,Attempt:0,}" Feb 13 23:46:36.804292 containerd[1511]: time="2025-02-13T23:46:36.803708647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:36.804292 containerd[1511]: time="2025-02-13T23:46:36.803857770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:36.804292 containerd[1511]: time="2025-02-13T23:46:36.803893006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:36.804292 containerd[1511]: time="2025-02-13T23:46:36.804062411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:36.850545 systemd[1]: Started cri-containerd-c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82.scope - libcontainer container c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82. Feb 13 23:46:36.915038 containerd[1511]: time="2025-02-13T23:46:36.914912169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vgcgb,Uid:5071061e-bca4-4e23-9b14-d2bc3d9b6853,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82\"" Feb 13 23:46:36.920348 containerd[1511]: time="2025-02-13T23:46:36.919891046Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 23:46:36.985376 containerd[1511]: time="2025-02-13T23:46:36.985169122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8w4w,Uid:3a0239e8-9d31-49dc-b96d-6dcab5b09687,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:37.021449 containerd[1511]: time="2025-02-13T23:46:37.021171622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:37.021449 containerd[1511]: time="2025-02-13T23:46:37.021309558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:37.022594 containerd[1511]: time="2025-02-13T23:46:37.021335491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:37.022594 containerd[1511]: time="2025-02-13T23:46:37.021541496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:37.051545 systemd[1]: Started cri-containerd-3ccfacee4ccb33c7b51556012556a54ec222abe41b9894e671af7fcaf668b357.scope - libcontainer container 3ccfacee4ccb33c7b51556012556a54ec222abe41b9894e671af7fcaf668b357. Feb 13 23:46:37.088654 containerd[1511]: time="2025-02-13T23:46:37.088428969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8w4w,Uid:3a0239e8-9d31-49dc-b96d-6dcab5b09687,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ccfacee4ccb33c7b51556012556a54ec222abe41b9894e671af7fcaf668b357\"" Feb 13 23:46:37.094820 containerd[1511]: time="2025-02-13T23:46:37.094478430Z" level=info msg="CreateContainer within sandbox \"3ccfacee4ccb33c7b51556012556a54ec222abe41b9894e671af7fcaf668b357\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 23:46:37.130456 containerd[1511]: time="2025-02-13T23:46:37.130346584Z" level=info msg="CreateContainer within sandbox \"3ccfacee4ccb33c7b51556012556a54ec222abe41b9894e671af7fcaf668b357\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b828c6b81e75a7afe56b75a8d3680fd6c3bd3d8272cda92cefe24ad8b466999\"" Feb 13 23:46:37.132294 containerd[1511]: time="2025-02-13T23:46:37.131733926Z" level=info msg="StartContainer for \"8b828c6b81e75a7afe56b75a8d3680fd6c3bd3d8272cda92cefe24ad8b466999\"" Feb 13 23:46:37.171493 systemd[1]: Started cri-containerd-8b828c6b81e75a7afe56b75a8d3680fd6c3bd3d8272cda92cefe24ad8b466999.scope - libcontainer container 8b828c6b81e75a7afe56b75a8d3680fd6c3bd3d8272cda92cefe24ad8b466999. Feb 13 23:46:37.223614 containerd[1511]: time="2025-02-13T23:46:37.223563997Z" level=info msg="StartContainer for \"8b828c6b81e75a7afe56b75a8d3680fd6c3bd3d8272cda92cefe24ad8b466999\" returns successfully" Feb 13 23:46:37.636212 systemd[1]: run-containerd-runc-k8s.io-c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82-runc.cT8rgG.mount: Deactivated successfully. Feb 13 23:46:39.411209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513771325.mount: Deactivated successfully. 
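
The sequence above is the CRI lifecycle end to end: `RunPodSandbox` returns a sandbox ID, `CreateContainer` is issued within that sandbox, and `StartContainer` confirms launch. The same state can be cross-checked against containerd with `crictl`; a sketch assuming root access to the CRI socket, with the JSON field names (`items`, `containers`, `id`, `state`) based on common `crictl -o json` output rather than anything in this log:

```python
import json
import subprocess

def crictl(*args):
    """Run a crictl subcommand and parse its JSON output."""
    out = subprocess.run(["crictl", *args, "-o", "json"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# Sandbox first (RunPodSandbox), then its containers (Create/StartContainer).
pods = crictl("pods", "--name", "kube-proxy-c8w4w")
for p in pods.get("items", []):
    print("sandbox:", p["id"][:13], p["state"])
    for c in crictl("ps", "--pod", p["id"]).get("containers", []):
        print("  container:", c["id"][:13], c["state"])
```
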
Feb 13 23:46:40.423791 containerd[1511]: time="2025-02-13T23:46:40.423690105Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:40.426309 containerd[1511]: time="2025-02-13T23:46:40.426181512Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 23:46:40.430339 containerd[1511]: time="2025-02-13T23:46:40.429746885Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:40.434829 containerd[1511]: time="2025-02-13T23:46:40.434786631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:40.435964 containerd[1511]: time="2025-02-13T23:46:40.435907589Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.515942092s" Feb 13 23:46:40.436064 containerd[1511]: time="2025-02-13T23:46:40.435976055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 23:46:40.452714 containerd[1511]: time="2025-02-13T23:46:40.452459276Z" level=info msg="CreateContainer within sandbox \"c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 23:46:40.505641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169217892.mount: Deactivated successfully. Feb 13 23:46:40.508459 containerd[1511]: time="2025-02-13T23:46:40.508410891Z" level=info msg="CreateContainer within sandbox \"c8f371108d60016eea785d52183707f0f3af2d5ac1f50f8f94b2c65fa14fee82\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"adc6f0db0e0cae8d091e885b4a1beff2516401701b62d0eeecaef694b26e6361\"" Feb 13 23:46:40.510392 containerd[1511]: time="2025-02-13T23:46:40.509112441Z" level=info msg="StartContainer for \"adc6f0db0e0cae8d091e885b4a1beff2516401701b62d0eeecaef694b26e6361\"" Feb 13 23:46:40.565514 systemd[1]: Started cri-containerd-adc6f0db0e0cae8d091e885b4a1beff2516401701b62d0eeecaef694b26e6361.scope - libcontainer container adc6f0db0e0cae8d091e885b4a1beff2516401701b62d0eeecaef694b26e6361. 
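
The pull result above includes enough data to estimate registry throughput: 21,758,492 bytes for the tigera/operator digest in 3.515942092s. A one-liner check using the figures copied from the log:

```python
size_bytes = 21_758_492          # digest size reported in the PullImage line
elapsed_s  = 3.515942092         # pull duration from the same line

print(f"{size_bytes / elapsed_s / 2**20:.2f} MiB/s")  # ~5.90 MiB/s
```
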
Feb 13 23:46:40.608767 containerd[1511]: time="2025-02-13T23:46:40.608666098Z" level=info msg="StartContainer for \"adc6f0db0e0cae8d091e885b4a1beff2516401701b62d0eeecaef694b26e6361\" returns successfully" Feb 13 23:46:40.659331 kubelet[2736]: I0213 23:46:40.656187 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8w4w" podStartSLOduration=4.6542679 podStartE2EDuration="4.6542679s" podCreationTimestamp="2025-02-13 23:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:46:37.657370991 +0000 UTC m=+17.322986365" watchObservedRunningTime="2025-02-13 23:46:40.6542679 +0000 UTC m=+20.319883263" Feb 13 23:46:40.660595 kubelet[2736]: I0213 23:46:40.659738 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-vgcgb" podStartSLOduration=1.127181875 podStartE2EDuration="4.65968915s" podCreationTimestamp="2025-02-13 23:46:36 +0000 UTC" firstStartedPulling="2025-02-13 23:46:36.917613773 +0000 UTC m=+16.583229111" lastFinishedPulling="2025-02-13 23:46:40.450121031 +0000 UTC m=+20.115736386" observedRunningTime="2025-02-13 23:46:40.659514879 +0000 UTC m=+20.325130222" watchObservedRunningTime="2025-02-13 23:46:40.65968915 +0000 UTC m=+20.325304529" Feb 13 23:46:44.248060 kubelet[2736]: I0213 23:46:44.247899 2736 topology_manager.go:215] "Topology Admit Handler" podUID="bd5c4459-63b0-43c7-8293-43cf060c2dee" podNamespace="calico-system" podName="calico-typha-5c79488846-lcj62" Feb 13 23:46:44.262442 systemd[1]: Created slice kubepods-besteffort-podbd5c4459_63b0_43c7_8293_43cf060c2dee.slice - libcontainer container kubepods-besteffort-podbd5c4459_63b0_43c7_8293_43cf060c2dee.slice. Feb 13 23:46:44.265428 kubelet[2736]: I0213 23:46:44.265383 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd5c4459-63b0-43c7-8293-43cf060c2dee-tigera-ca-bundle\") pod \"calico-typha-5c79488846-lcj62\" (UID: \"bd5c4459-63b0-43c7-8293-43cf060c2dee\") " pod="calico-system/calico-typha-5c79488846-lcj62" Feb 13 23:46:44.265583 kubelet[2736]: I0213 23:46:44.265443 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzn66\" (UniqueName: \"kubernetes.io/projected/bd5c4459-63b0-43c7-8293-43cf060c2dee-kube-api-access-rzn66\") pod \"calico-typha-5c79488846-lcj62\" (UID: \"bd5c4459-63b0-43c7-8293-43cf060c2dee\") " pod="calico-system/calico-typha-5c79488846-lcj62" Feb 13 23:46:44.265583 kubelet[2736]: I0213 23:46:44.265482 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd5c4459-63b0-43c7-8293-43cf060c2dee-typha-certs\") pod \"calico-typha-5c79488846-lcj62\" (UID: \"bd5c4459-63b0-43c7-8293-43cf060c2dee\") " pod="calico-system/calico-typha-5c79488846-lcj62" Feb 13 23:46:44.432515 kubelet[2736]: I0213 23:46:44.432455 2736 topology_manager.go:215] "Topology Admit Handler" podUID="05783007-a592-4fb5-94a5-329eb3da307a" podNamespace="calico-system" podName="calico-node-tvnmz" Feb 13 23:46:44.448370 systemd[1]: Created slice kubepods-besteffort-pod05783007_a592_4fb5_94a5_329eb3da307a.slice - libcontainer container kubepods-besteffort-pod05783007_a592_4fb5_94a5_329eb3da307a.slice. 
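
The "Created slice kubepods-besteffort-pod..." lines show how the kubelet (with `CgroupDriver: systemd`, per the NodeConfig earlier) derives a pod's systemd slice: the QoS class is folded into the name and dashes in the pod UID become underscores, since `-` is systemd's slice hierarchy separator. A sketch reconstructing the name for the calico-typha pod admitted above:

```python
def pod_slice(uid: str, qos: str = "besteffort") -> str:
    """Build the systemd slice name kubelet uses for a pod cgroup.

    Dashes in the UID are escaped to underscores because '-' encodes
    slice nesting in systemd unit names.
    """
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("bd5c4459-63b0-43c7-8293-43cf060c2dee"))
# kubepods-besteffort-podbd5c4459_63b0_43c7_8293_43cf060c2dee.slice
```
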
Feb 13 23:46:44.467751 kubelet[2736]: I0213 23:46:44.467654 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-xtables-lock\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.467751 kubelet[2736]: I0213 23:46:44.467722 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05783007-a592-4fb5-94a5-329eb3da307a-tigera-ca-bundle\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468019 kubelet[2736]: I0213 23:46:44.467763 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-var-lib-calico\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468019 kubelet[2736]: I0213 23:46:44.467795 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-lib-modules\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468019 kubelet[2736]: I0213 23:46:44.467824 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-cni-bin-dir\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468019 kubelet[2736]: I0213 23:46:44.467879 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-cni-net-dir\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468019 kubelet[2736]: I0213 23:46:44.467913 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2lh\" (UniqueName: \"kubernetes.io/projected/05783007-a592-4fb5-94a5-329eb3da307a-kube-api-access-kf2lh\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468381 kubelet[2736]: I0213 23:46:44.467944 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-cni-log-dir\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468381 kubelet[2736]: I0213 23:46:44.467975 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-flexvol-driver-host\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468381 kubelet[2736]: I0213 23:46:44.468006 2736 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05783007-a592-4fb5-94a5-329eb3da307a-node-certs\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468381 kubelet[2736]: I0213 23:46:44.468037 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-policysync\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.468381 kubelet[2736]: I0213 23:46:44.468065 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05783007-a592-4fb5-94a5-329eb3da307a-var-run-calico\") pod \"calico-node-tvnmz\" (UID: \"05783007-a592-4fb5-94a5-329eb3da307a\") " pod="calico-system/calico-node-tvnmz" Feb 13 23:46:44.575161 kubelet[2736]: E0213 23:46:44.573006 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.575161 kubelet[2736]: W0213 23:46:44.573069 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.575161 kubelet[2736]: E0213 23:46:44.573165 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.576727 containerd[1511]: time="2025-02-13T23:46:44.576506607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c79488846-lcj62,Uid:bd5c4459-63b0-43c7-8293-43cf060c2dee,Namespace:calico-system,Attempt:0,}" Feb 13 23:46:44.587470 kubelet[2736]: E0213 23:46:44.587328 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.587470 kubelet[2736]: W0213 23:46:44.587369 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.587470 kubelet[2736]: E0213 23:46:44.587398 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.588161 kubelet[2736]: I0213 23:46:44.587722 2736 topology_manager.go:215] "Topology Admit Handler" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" podNamespace="calico-system" podName="csi-node-driver-rwh7r" Feb 13 23:46:44.588161 kubelet[2736]: E0213 23:46:44.588098 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:44.625240 kubelet[2736]: E0213 23:46:44.624185 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.625240 kubelet[2736]: W0213 23:46:44.624217 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.625240 kubelet[2736]: E0213 23:46:44.624389 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.663793 kubelet[2736]: E0213 23:46:44.663327 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.663793 kubelet[2736]: W0213 23:46:44.663378 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.663793 kubelet[2736]: E0213 23:46:44.663451 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.665094 kubelet[2736]: E0213 23:46:44.664706 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.665094 kubelet[2736]: W0213 23:46:44.664745 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.665094 kubelet[2736]: E0213 23:46:44.664765 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.666504 kubelet[2736]: E0213 23:46:44.665964 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.666504 kubelet[2736]: W0213 23:46:44.666364 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.666504 kubelet[2736]: E0213 23:46:44.666434 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.668885 kubelet[2736]: E0213 23:46:44.668696 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.668885 kubelet[2736]: W0213 23:46:44.668740 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.668885 kubelet[2736]: E0213 23:46:44.668761 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.669532 kubelet[2736]: E0213 23:46:44.669339 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.669532 kubelet[2736]: W0213 23:46:44.669378 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.669532 kubelet[2736]: E0213 23:46:44.669398 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.670255 kubelet[2736]: E0213 23:46:44.669972 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.670255 kubelet[2736]: W0213 23:46:44.670011 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.670255 kubelet[2736]: E0213 23:46:44.670031 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.672768 kubelet[2736]: E0213 23:46:44.670598 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.672768 kubelet[2736]: W0213 23:46:44.670618 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.672768 kubelet[2736]: E0213 23:46:44.672572 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.673586 kubelet[2736]: E0213 23:46:44.673307 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.673586 kubelet[2736]: W0213 23:46:44.673357 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.673586 kubelet[2736]: E0213 23:46:44.673383 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.674003 kubelet[2736]: E0213 23:46:44.673933 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.674003 kubelet[2736]: W0213 23:46:44.673955 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.674003 kubelet[2736]: E0213 23:46:44.673974 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.675022 kubelet[2736]: E0213 23:46:44.674728 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.675022 kubelet[2736]: W0213 23:46:44.674750 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.675022 kubelet[2736]: E0213 23:46:44.674770 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.677554 kubelet[2736]: E0213 23:46:44.677388 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.677554 kubelet[2736]: W0213 23:46:44.677433 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.677554 kubelet[2736]: E0213 23:46:44.677456 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.678303 kubelet[2736]: E0213 23:46:44.678040 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.678303 kubelet[2736]: W0213 23:46:44.678061 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.678303 kubelet[2736]: E0213 23:46:44.678080 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.679019 kubelet[2736]: E0213 23:46:44.678648 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.679019 kubelet[2736]: W0213 23:46:44.678665 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.679019 kubelet[2736]: E0213 23:46:44.678683 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.680624 kubelet[2736]: E0213 23:46:44.679744 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.680624 kubelet[2736]: W0213 23:46:44.679770 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.680624 kubelet[2736]: E0213 23:46:44.679791 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.681147 kubelet[2736]: E0213 23:46:44.680976 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.681147 kubelet[2736]: W0213 23:46:44.680997 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.681147 kubelet[2736]: E0213 23:46:44.681019 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.682353 kubelet[2736]: E0213 23:46:44.682279 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.682353 kubelet[2736]: W0213 23:46:44.682302 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.682353 kubelet[2736]: E0213 23:46:44.682322 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.684176 kubelet[2736]: E0213 23:46:44.683920 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.684176 kubelet[2736]: W0213 23:46:44.683943 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.684176 kubelet[2736]: E0213 23:46:44.683963 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.684801 kubelet[2736]: E0213 23:46:44.684475 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.684801 kubelet[2736]: W0213 23:46:44.684496 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.684801 kubelet[2736]: E0213 23:46:44.684514 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.686538 kubelet[2736]: E0213 23:46:44.686372 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.686538 kubelet[2736]: W0213 23:46:44.686396 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.686538 kubelet[2736]: E0213 23:46:44.686416 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.687242 kubelet[2736]: E0213 23:46:44.686868 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.687242 kubelet[2736]: W0213 23:46:44.686886 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.687242 kubelet[2736]: E0213 23:46:44.686903 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.688558 kubelet[2736]: E0213 23:46:44.688319 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.688558 kubelet[2736]: W0213 23:46:44.688342 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.688558 kubelet[2736]: E0213 23:46:44.688362 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.688558 kubelet[2736]: I0213 23:46:44.688407 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1e82caf-91d2-4fb9-9ee6-89b2d78222a5-varrun\") pod \"csi-node-driver-rwh7r\" (UID: \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\") " pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:44.690271 kubelet[2736]: E0213 23:46:44.690105 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.690271 kubelet[2736]: W0213 23:46:44.690133 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.690271 kubelet[2736]: E0213 23:46:44.690166 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.690271 kubelet[2736]: I0213 23:46:44.690196 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1e82caf-91d2-4fb9-9ee6-89b2d78222a5-kubelet-dir\") pod \"csi-node-driver-rwh7r\" (UID: \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\") " pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:44.690739 kubelet[2736]: E0213 23:46:44.690676 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.690739 kubelet[2736]: W0213 23:46:44.690725 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.691084 kubelet[2736]: E0213 23:46:44.690758 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.692458 kubelet[2736]: E0213 23:46:44.692429 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.692458 kubelet[2736]: W0213 23:46:44.692453 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.692797 kubelet[2736]: E0213 23:46:44.692481 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.692878 kubelet[2736]: E0213 23:46:44.692834 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.692878 kubelet[2736]: W0213 23:46:44.692860 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.692878 kubelet[2736]: E0213 23:46:44.692887 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.693152 kubelet[2736]: I0213 23:46:44.692919 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k7mb\" (UniqueName: \"kubernetes.io/projected/d1e82caf-91d2-4fb9-9ee6-89b2d78222a5-kube-api-access-9k7mb\") pod \"csi-node-driver-rwh7r\" (UID: \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\") " pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:44.694809 kubelet[2736]: E0213 23:46:44.694477 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.694809 kubelet[2736]: W0213 23:46:44.694504 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.694809 kubelet[2736]: E0213 23:46:44.694581 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.694809 kubelet[2736]: I0213 23:46:44.694626 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1e82caf-91d2-4fb9-9ee6-89b2d78222a5-registration-dir\") pod \"csi-node-driver-rwh7r\" (UID: \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\") " pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:44.695663 kubelet[2736]: E0213 23:46:44.694872 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.695663 kubelet[2736]: W0213 23:46:44.694900 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.695663 kubelet[2736]: E0213 23:46:44.695218 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.695663 kubelet[2736]: W0213 23:46:44.695234 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.696946 kubelet[2736]: E0213 23:46:44.696430 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.696946 kubelet[2736]: W0213 23:46:44.696454 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.696946 kubelet[2736]: E0213 23:46:44.696475 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.696946 kubelet[2736]: E0213 23:46:44.696490 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.696946 kubelet[2736]: I0213 23:46:44.696505 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1e82caf-91d2-4fb9-9ee6-89b2d78222a5-socket-dir\") pod \"csi-node-driver-rwh7r\" (UID: \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\") " pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:44.696946 kubelet[2736]: E0213 23:46:44.696476 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.698655 kubelet[2736]: E0213 23:46:44.698357 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.698655 kubelet[2736]: W0213 23:46:44.698378 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.700280 kubelet[2736]: E0213 23:46:44.698952 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.700280 kubelet[2736]: E0213 23:46:44.699079 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.700280 kubelet[2736]: W0213 23:46:44.699095 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.700280 kubelet[2736]: E0213 23:46:44.699128 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.700983 kubelet[2736]: E0213 23:46:44.700771 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.700983 kubelet[2736]: W0213 23:46:44.700795 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.700983 kubelet[2736]: E0213 23:46:44.700816 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.703296 kubelet[2736]: E0213 23:46:44.701784 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.703296 kubelet[2736]: W0213 23:46:44.701808 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.703296 kubelet[2736]: E0213 23:46:44.701828 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.704096 kubelet[2736]: E0213 23:46:44.703887 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.704096 kubelet[2736]: W0213 23:46:44.703914 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.704096 kubelet[2736]: E0213 23:46:44.703942 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.706528 kubelet[2736]: E0213 23:46:44.706360 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.706528 kubelet[2736]: W0213 23:46:44.706386 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.706528 kubelet[2736]: E0213 23:46:44.706406 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.726216 containerd[1511]: time="2025-02-13T23:46:44.725081576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:44.726636 containerd[1511]: time="2025-02-13T23:46:44.725189133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:44.728346 containerd[1511]: time="2025-02-13T23:46:44.728144381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:44.730271 containerd[1511]: time="2025-02-13T23:46:44.730094277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:44.757084 containerd[1511]: time="2025-02-13T23:46:44.756947458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvnmz,Uid:05783007-a592-4fb5-94a5-329eb3da307a,Namespace:calico-system,Attempt:0,}" Feb 13 23:46:44.803998 kubelet[2736]: E0213 23:46:44.803570 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.803998 kubelet[2736]: W0213 23:46:44.803603 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.803998 kubelet[2736]: E0213 23:46:44.803632 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.806133 kubelet[2736]: E0213 23:46:44.805631 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.806133 kubelet[2736]: W0213 23:46:44.805655 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.806133 kubelet[2736]: E0213 23:46:44.805688 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.807003 kubelet[2736]: E0213 23:46:44.806785 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.807003 kubelet[2736]: W0213 23:46:44.806808 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.807003 kubelet[2736]: E0213 23:46:44.806839 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.807796 kubelet[2736]: E0213 23:46:44.807454 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.807796 kubelet[2736]: W0213 23:46:44.807474 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.807796 kubelet[2736]: E0213 23:46:44.807569 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.809385 kubelet[2736]: E0213 23:46:44.809082 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.809385 kubelet[2736]: W0213 23:46:44.809105 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.809385 kubelet[2736]: E0213 23:46:44.809236 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.812102 kubelet[2736]: E0213 23:46:44.811172 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.812102 kubelet[2736]: W0213 23:46:44.811218 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.812102 kubelet[2736]: E0213 23:46:44.811532 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.812102 kubelet[2736]: W0213 23:46:44.811547 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.812102 kubelet[2736]: E0213 23:46:44.811784 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.812102 kubelet[2736]: W0213 23:46:44.811799 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.812102 kubelet[2736]: E0213 23:46:44.812051 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.812102 kubelet[2736]: W0213 23:46:44.812066 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.813646 kubelet[2736]: E0213 23:46:44.812987 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.813646 kubelet[2736]: W0213 23:46:44.813005 2736 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.813646 kubelet[2736]: E0213 23:46:44.813023 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.813646 kubelet[2736]: E0213 23:46:44.813312 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.813646 kubelet[2736]: W0213 23:46:44.813328 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.813646 kubelet[2736]: E0213 23:46:44.813345 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.814928 kubelet[2736]: E0213 23:46:44.814472 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.814928 kubelet[2736]: W0213 23:46:44.814496 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.814928 kubelet[2736]: E0213 23:46:44.814516 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.817420 kubelet[2736]: E0213 23:46:44.817218 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.817464 systemd[1]: Started cri-containerd-93e1920e74f7af2e478279b5835ec13571fc25b1dea549986eb0cc098076c29e.scope - libcontainer container 93e1920e74f7af2e478279b5835ec13571fc25b1dea549986eb0cc098076c29e. Feb 13 23:46:44.818191 kubelet[2736]: E0213 23:46:44.817919 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.818191 kubelet[2736]: W0213 23:46:44.817942 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.818191 kubelet[2736]: E0213 23:46:44.817963 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.818892 kubelet[2736]: E0213 23:46:44.818669 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.818892 kubelet[2736]: W0213 23:46:44.818691 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.818892 kubelet[2736]: E0213 23:46:44.818710 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.821647 kubelet[2736]: E0213 23:46:44.821402 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.821647 kubelet[2736]: W0213 23:46:44.821427 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.821647 kubelet[2736]: E0213 23:46:44.821448 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.821647 kubelet[2736]: E0213 23:46:44.821483 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.822751 kubelet[2736]: E0213 23:46:44.822307 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.822751 kubelet[2736]: W0213 23:46:44.822329 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.822751 kubelet[2736]: E0213 23:46:44.822348 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.822751 kubelet[2736]: E0213 23:46:44.822594 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.822751 kubelet[2736]: W0213 23:46:44.822608 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.822751 kubelet[2736]: E0213 23:46:44.822625 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.824158 kubelet[2736]: E0213 23:46:44.823463 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.824158 kubelet[2736]: E0213 23:46:44.823540 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.824158 kubelet[2736]: E0213 23:46:44.823906 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.824158 kubelet[2736]: W0213 23:46:44.823922 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.824158 kubelet[2736]: E0213 23:46:44.823949 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.826782 kubelet[2736]: E0213 23:46:44.824709 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.826782 kubelet[2736]: W0213 23:46:44.824740 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.826782 kubelet[2736]: E0213 23:46:44.824769 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.829636 kubelet[2736]: E0213 23:46:44.828976 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.829636 kubelet[2736]: W0213 23:46:44.829002 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.830361 kubelet[2736]: E0213 23:46:44.829945 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.832491 kubelet[2736]: E0213 23:46:44.829960 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.832491 kubelet[2736]: W0213 23:46:44.832308 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.832491 kubelet[2736]: E0213 23:46:44.832348 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.833726 kubelet[2736]: E0213 23:46:44.832979 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.833726 kubelet[2736]: W0213 23:46:44.832999 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.833726 kubelet[2736]: E0213 23:46:44.833500 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.836205 kubelet[2736]: E0213 23:46:44.835040 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.836205 kubelet[2736]: W0213 23:46:44.835063 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.836205 kubelet[2736]: E0213 23:46:44.835291 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 23:46:44.844292 kubelet[2736]: E0213 23:46:44.841068 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.844292 kubelet[2736]: W0213 23:46:44.841106 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.844292 kubelet[2736]: E0213 23:46:44.842308 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.865327 kubelet[2736]: E0213 23:46:44.845376 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.865327 kubelet[2736]: W0213 23:46:44.845396 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.865327 kubelet[2736]: E0213 23:46:44.845533 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.865529 containerd[1511]: time="2025-02-13T23:46:44.855958553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:46:44.865529 containerd[1511]: time="2025-02-13T23:46:44.856109093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:46:44.865529 containerd[1511]: time="2025-02-13T23:46:44.856207108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:44.865529 containerd[1511]: time="2025-02-13T23:46:44.856636333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:46:44.884274 kubelet[2736]: E0213 23:46:44.884146 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 23:46:44.884274 kubelet[2736]: W0213 23:46:44.884177 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 23:46:44.884274 kubelet[2736]: E0213 23:46:44.884204 2736 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 23:46:44.921460 systemd[1]: Started cri-containerd-abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b.scope - libcontainer container abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b. 
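The burst of kubelet errors above is FlexVolume's exec-based probing: for every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, kubelet execs the driver binary with the argument init and parses a JSON status object from its stdout. Here the nodeagent~uds directory is present but its uds executable is not, so stdout comes back empty and the unmarshal fails with "unexpected end of JSON input" on every probe cycle. As a minimal sketch of what the probe expects — a hypothetical stub, not the real nodeagent~uds driver:

    #!/usr/bin/env python3
    # Hypothetical stand-in for .../volume/exec/nodeagent~uds/uds.
    # kubelet invokes FlexVolume drivers as `<driver> <op> [args...]`
    # and expects a JSON status object on stdout for every call.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # "attach": false tells kubelet this driver has no attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Unimplemented operations must still answer with valid JSON.
        print(json.dumps({"status": "Not supported", "message": "op %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Any driver answering init this way would quiet the unmarshal errors; on Calico nodes the pod2daemon-flexvol ("flexvol-driver") container pulled a few entries later is what normally installs the real uds binary, which is why the errors stop recurring once that container has run.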
Feb 13 23:46:44.992930 containerd[1511]: time="2025-02-13T23:46:44.992328528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvnmz,Uid:05783007-a592-4fb5-94a5-329eb3da307a,Namespace:calico-system,Attempt:0,} returns sandbox id \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\"" Feb 13 23:46:44.994718 containerd[1511]: time="2025-02-13T23:46:44.994634700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 23:46:45.033279 containerd[1511]: time="2025-02-13T23:46:45.033181483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c79488846-lcj62,Uid:bd5c4459-63b0-43c7-8293-43cf060c2dee,Namespace:calico-system,Attempt:0,} returns sandbox id \"93e1920e74f7af2e478279b5835ec13571fc25b1dea549986eb0cc098076c29e\"" Feb 13 23:46:46.542992 kubelet[2736]: E0213 23:46:46.542926 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:46.667662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638123863.mount: Deactivated successfully. Feb 13 23:46:46.851396 containerd[1511]: time="2025-02-13T23:46:46.851094282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:46.855015 containerd[1511]: time="2025-02-13T23:46:46.854928123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 23:46:46.857511 containerd[1511]: time="2025-02-13T23:46:46.857431523Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:46.861286 containerd[1511]: time="2025-02-13T23:46:46.860914412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:46.862640 containerd[1511]: time="2025-02-13T23:46:46.862004564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.867319283s" Feb 13 23:46:46.862640 containerd[1511]: time="2025-02-13T23:46:46.862054477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 23:46:46.865684 containerd[1511]: time="2025-02-13T23:46:46.865646432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 23:46:46.866911 containerd[1511]: time="2025-02-13T23:46:46.866861048Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 23:46:46.893494 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount66789227.mount: Deactivated successfully. Feb 13 23:46:46.901355 containerd[1511]: time="2025-02-13T23:46:46.901117945Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31\"" Feb 13 23:46:46.903095 containerd[1511]: time="2025-02-13T23:46:46.903053858Z" level=info msg="StartContainer for \"023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31\"" Feb 13 23:46:46.944459 systemd[1]: Started cri-containerd-023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31.scope - libcontainer container 023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31. Feb 13 23:46:46.992277 containerd[1511]: time="2025-02-13T23:46:46.992191725Z" level=info msg="StartContainer for \"023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31\" returns successfully" Feb 13 23:46:47.014308 systemd[1]: cri-containerd-023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31.scope: Deactivated successfully. Feb 13 23:46:47.174924 containerd[1511]: time="2025-02-13T23:46:47.162313124Z" level=info msg="shim disconnected" id=023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31 namespace=k8s.io Feb 13 23:46:47.174924 containerd[1511]: time="2025-02-13T23:46:47.174816244Z" level=warning msg="cleaning up after shim disconnected" id=023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31 namespace=k8s.io Feb 13 23:46:47.174924 containerd[1511]: time="2025-02-13T23:46:47.174848094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:46:47.598302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-023bda48e627d8eb8d8c4864b8fd6fb70922bad8b7df9fa415ec7ef7511a8c31-rootfs.mount: Deactivated successfully. 
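The temporary mount units above (var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount) are ordinary systemd path escaping: "/" separators in the mount point become "-" in the unit name, and a literal "-" inside a path component is escaped as \x2d. A rough sketch of that mapping — a simplified subset of `systemd-escape --path`, ignoring the other characters it hex-escapes:

    # Simplified sketch of systemd's path -> unit-name escaping, enough to
    # reproduce the mount-unit names in this log. Real systemd-escape also
    # hex-escapes characters outside [a-zA-Z0-9:_.].
    def systemd_escape_path(path: str) -> str:
        parts = [p.replace("-", "\\x2d") for p in path.strip("/").split("/")]
        return "-".join(parts)

    print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount66789227") + ".mount")
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount66789227.mount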
Feb 13 23:46:48.542324 kubelet[2736]: E0213 23:46:48.542188 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:49.990269 containerd[1511]: time="2025-02-13T23:46:49.990188060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:49.994092 containerd[1511]: time="2025-02-13T23:46:49.993941400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 23:46:49.996863 containerd[1511]: time="2025-02-13T23:46:49.996798175Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:50.003608 containerd[1511]: time="2025-02-13T23:46:50.003415753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:50.005617 containerd[1511]: time="2025-02-13T23:46:50.005388579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.13969381s" Feb 13 23:46:50.005617 containerd[1511]: time="2025-02-13T23:46:50.005460255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 23:46:50.007779 containerd[1511]: time="2025-02-13T23:46:50.007502997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 23:46:50.025914 containerd[1511]: time="2025-02-13T23:46:50.025620060Z" level=info msg="CreateContainer within sandbox \"93e1920e74f7af2e478279b5835ec13571fc25b1dea549986eb0cc098076c29e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 23:46:50.053454 containerd[1511]: time="2025-02-13T23:46:50.053393416Z" level=info msg="CreateContainer within sandbox \"93e1920e74f7af2e478279b5835ec13571fc25b1dea549986eb0cc098076c29e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d121f46e00fb4ff94281c1a75b6a857774084b5eec01f961848bef0147d50027\"" Feb 13 23:46:50.055975 containerd[1511]: time="2025-02-13T23:46:50.054351523Z" level=info msg="StartContainer for \"d121f46e00fb4ff94281c1a75b6a857774084b5eec01f961848bef0147d50027\"" Feb 13 23:46:50.105603 systemd[1]: Started cri-containerd-d121f46e00fb4ff94281c1a75b6a857774084b5eec01f961848bef0147d50027.scope - libcontainer container d121f46e00fb4ff94281c1a75b6a857774084b5eec01f961848bef0147d50027. 
Feb 13 23:46:50.180579 containerd[1511]: time="2025-02-13T23:46:50.180521345Z" level=info msg="StartContainer for \"d121f46e00fb4ff94281c1a75b6a857774084b5eec01f961848bef0147d50027\" returns successfully" Feb 13 23:46:50.542953 kubelet[2736]: E0213 23:46:50.541987 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:50.696001 kubelet[2736]: I0213 23:46:50.695347 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c79488846-lcj62" podStartSLOduration=1.723302685 podStartE2EDuration="6.695324353s" podCreationTimestamp="2025-02-13 23:46:44 +0000 UTC" firstStartedPulling="2025-02-13 23:46:45.035146839 +0000 UTC m=+24.700762177" lastFinishedPulling="2025-02-13 23:46:50.007168495 +0000 UTC m=+29.672783845" observedRunningTime="2025-02-13 23:46:50.694107785 +0000 UTC m=+30.359723137" watchObservedRunningTime="2025-02-13 23:46:50.695324353 +0000 UTC m=+30.360939707" Feb 13 23:46:51.688964 kubelet[2736]: I0213 23:46:51.688900 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 23:46:52.541227 kubelet[2736]: E0213 23:46:52.541124 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:54.544367 kubelet[2736]: E0213 23:46:54.542751 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:56.541164 kubelet[2736]: E0213 23:46:56.540989 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:56.574913 containerd[1511]: time="2025-02-13T23:46:56.574450197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:56.576530 containerd[1511]: time="2025-02-13T23:46:56.576207059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 23:46:56.577659 containerd[1511]: time="2025-02-13T23:46:56.577579106Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:56.581217 containerd[1511]: time="2025-02-13T23:46:56.581130656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:46:56.583314 containerd[1511]: time="2025-02-13T23:46:56.582482006Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.574932502s" Feb 13 23:46:56.583314 containerd[1511]: time="2025-02-13T23:46:56.582551024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 23:46:56.585854 containerd[1511]: time="2025-02-13T23:46:56.585808316Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 23:46:56.623019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580453079.mount: Deactivated successfully. Feb 13 23:46:56.627214 containerd[1511]: time="2025-02-13T23:46:56.626962385Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680\"" Feb 13 23:46:56.628398 containerd[1511]: time="2025-02-13T23:46:56.628357771Z" level=info msg="StartContainer for \"db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680\"" Feb 13 23:46:56.705086 systemd[1]: Started cri-containerd-db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680.scope - libcontainer container db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680. Feb 13 23:46:56.767092 containerd[1511]: time="2025-02-13T23:46:56.767026540Z" level=info msg="StartContainer for \"db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680\" returns successfully" Feb 13 23:46:57.769651 systemd[1]: cri-containerd-db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680.scope: Deactivated successfully. Feb 13 23:46:57.825499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680-rootfs.mount: Deactivated successfully. 
Feb 13 23:46:57.981154 kubelet[2736]: I0213 23:46:57.980845 2736 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 23:46:57.982866 containerd[1511]: time="2025-02-13T23:46:57.982046777Z" level=info msg="shim disconnected" id=db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680 namespace=k8s.io Feb 13 23:46:57.982866 containerd[1511]: time="2025-02-13T23:46:57.982147703Z" level=warning msg="cleaning up after shim disconnected" id=db0f68bb64e381db97865dd8a727af737d3d86a517eb9972a7722c2e5381d680 namespace=k8s.io Feb 13 23:46:57.982866 containerd[1511]: time="2025-02-13T23:46:57.982165539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:46:58.038599 kubelet[2736]: I0213 23:46:58.038541 2736 topology_manager.go:215] "Topology Admit Handler" podUID="86f73b1f-c829-49fc-b6e2-286cf7bba006" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zfpxh" Feb 13 23:46:58.043278 kubelet[2736]: I0213 23:46:58.041916 2736 topology_manager.go:215] "Topology Admit Handler" podUID="96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xsvvc" Feb 13 23:46:58.043817 kubelet[2736]: I0213 23:46:58.043781 2736 topology_manager.go:215] "Topology Admit Handler" podUID="a5485d8a-381a-4e03-acb1-c089decfc6bb" podNamespace="calico-system" podName="calico-kube-controllers-655c684b7f-hp2zr" Feb 13 23:46:58.048155 kubelet[2736]: I0213 23:46:58.048060 2736 topology_manager.go:215] "Topology Admit Handler" podUID="a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40" podNamespace="calico-apiserver" podName="calico-apiserver-66d4b7ccb4-trqhb" Feb 13 23:46:58.051278 kubelet[2736]: I0213 23:46:58.051008 2736 topology_manager.go:215] "Topology Admit Handler" podUID="6dcdb1d9-abfe-415f-add7-16a0f07fc6fe" podNamespace="calico-apiserver" podName="calico-apiserver-66d4b7ccb4-g9d5d" Feb 13 23:46:58.055164 systemd[1]: Created slice kubepods-burstable-pod86f73b1f_c829_49fc_b6e2_286cf7bba006.slice - libcontainer container kubepods-burstable-pod86f73b1f_c829_49fc_b6e2_286cf7bba006.slice. 
Feb 13 23:46:58.059730 kubelet[2736]: W0213 23:46:58.058941 2736 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-gs5j1.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gs5j1.gb1.brightbox.com' and this object Feb 13 23:46:58.059730 kubelet[2736]: E0213 23:46:58.059576 2736 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-gs5j1.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gs5j1.gb1.brightbox.com' and this object Feb 13 23:46:58.060493 kubelet[2736]: W0213 23:46:58.060290 2736 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-gs5j1.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gs5j1.gb1.brightbox.com' and this object Feb 13 23:46:58.060493 kubelet[2736]: E0213 23:46:58.060322 2736 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-gs5j1.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gs5j1.gb1.brightbox.com' and this object Feb 13 23:46:58.075706 systemd[1]: Created slice kubepods-burstable-pod96208589_c8c3_4cfc_a4f9_c1d2e0b2d2ee.slice - libcontainer container kubepods-burstable-pod96208589_c8c3_4cfc_a4f9_c1d2e0b2d2ee.slice. Feb 13 23:46:58.092994 systemd[1]: Created slice kubepods-besteffort-poda5485d8a_381a_4e03_acb1_c089decfc6bb.slice - libcontainer container kubepods-besteffort-poda5485d8a_381a_4e03_acb1_c089decfc6bb.slice. Feb 13 23:46:58.111726 systemd[1]: Created slice kubepods-besteffort-poda373ede0_5d5e_4f23_95b8_1a3f2d2a9b40.slice - libcontainer container kubepods-besteffort-poda373ede0_5d5e_4f23_95b8_1a3f2d2a9b40.slice. Feb 13 23:46:58.127299 systemd[1]: Created slice kubepods-besteffort-pod6dcdb1d9_abfe_415f_add7_16a0f07fc6fe.slice - libcontainer container kubepods-besteffort-pod6dcdb1d9_abfe_415f_add7_16a0f07fc6fe.slice. 
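The kubepods-….slice names in the "Created slice" entries above follow a mechanical scheme: the pod's QoS class (burstable, besteffort) picks the parent slice, and the pod UID is embedded with its dashes rewritten as underscores, since systemd reserves "-" as the slice hierarchy separator. A small sketch reproducing the names seen here:

    # Reproduce the pod slice names from the "Created slice" lines above.
    # "-" is the slice-hierarchy separator in systemd unit names, so the
    # UID's dashes are rewritten to "_" inside the unit name.
    def pod_slice(qos: str, uid: str) -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "86f73b1f-c829-49fc-b6e2-286cf7bba006"))
    # -> kubepods-burstable-pod86f73b1f_c829_49fc_b6e2_286cf7bba006.slice
    print(pod_slice("besteffort", "6dcdb1d9-abfe-415f-add7-16a0f07fc6fe"))
    # -> kubepods-besteffort-pod6dcdb1d9_abfe_415f_add7_16a0f07fc6fe.slice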
Feb 13 23:46:58.130435 kubelet[2736]: I0213 23:46:58.130039 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vnr\" (UniqueName: \"kubernetes.io/projected/a5485d8a-381a-4e03-acb1-c089decfc6bb-kube-api-access-47vnr\") pod \"calico-kube-controllers-655c684b7f-hp2zr\" (UID: \"a5485d8a-381a-4e03-acb1-c089decfc6bb\") " pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" Feb 13 23:46:58.130810 kubelet[2736]: I0213 23:46:58.130782 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5485d8a-381a-4e03-acb1-c089decfc6bb-tigera-ca-bundle\") pod \"calico-kube-controllers-655c684b7f-hp2zr\" (UID: \"a5485d8a-381a-4e03-acb1-c089decfc6bb\") " pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" Feb 13 23:46:58.131002 kubelet[2736]: I0213 23:46:58.130959 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4k82\" (UniqueName: \"kubernetes.io/projected/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-kube-api-access-w4k82\") pod \"calico-apiserver-66d4b7ccb4-g9d5d\" (UID: \"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe\") " pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" Feb 13 23:46:58.131456 kubelet[2736]: I0213 23:46:58.131303 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfrb\" (UniqueName: \"kubernetes.io/projected/86f73b1f-c829-49fc-b6e2-286cf7bba006-kube-api-access-6qfrb\") pod \"coredns-7db6d8ff4d-zfpxh\" (UID: \"86f73b1f-c829-49fc-b6e2-286cf7bba006\") " pod="kube-system/coredns-7db6d8ff4d-zfpxh" Feb 13 23:46:58.131456 kubelet[2736]: I0213 23:46:58.131385 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-calico-apiserver-certs\") pod \"calico-apiserver-66d4b7ccb4-g9d5d\" (UID: \"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe\") " pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" Feb 13 23:46:58.131456 kubelet[2736]: I0213 23:46:58.131444 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f73b1f-c829-49fc-b6e2-286cf7bba006-config-volume\") pod \"coredns-7db6d8ff4d-zfpxh\" (UID: \"86f73b1f-c829-49fc-b6e2-286cf7bba006\") " pod="kube-system/coredns-7db6d8ff4d-zfpxh" Feb 13 23:46:58.132190 kubelet[2736]: I0213 23:46:58.131476 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjln6\" (UniqueName: \"kubernetes.io/projected/96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee-kube-api-access-zjln6\") pod \"coredns-7db6d8ff4d-xsvvc\" (UID: \"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee\") " pod="kube-system/coredns-7db6d8ff4d-xsvvc" Feb 13 23:46:58.132190 kubelet[2736]: I0213 23:46:58.131528 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee-config-volume\") pod \"coredns-7db6d8ff4d-xsvvc\" (UID: \"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee\") " pod="kube-system/coredns-7db6d8ff4d-xsvvc" Feb 13 23:46:58.132190 kubelet[2736]: I0213 23:46:58.131595 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4k6hq\" (UniqueName: \"kubernetes.io/projected/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-kube-api-access-4k6hq\") pod \"calico-apiserver-66d4b7ccb4-trqhb\" (UID: \"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40\") " pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" Feb 13 23:46:58.132190 kubelet[2736]: I0213 23:46:58.131631 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-calico-apiserver-certs\") pod \"calico-apiserver-66d4b7ccb4-trqhb\" (UID: \"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40\") " pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" Feb 13 23:46:58.365241 containerd[1511]: time="2025-02-13T23:46:58.364949336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfpxh,Uid:86f73b1f-c829-49fc-b6e2-286cf7bba006,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:58.384373 containerd[1511]: time="2025-02-13T23:46:58.383613327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsvvc,Uid:96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee,Namespace:kube-system,Attempt:0,}" Feb 13 23:46:58.406902 containerd[1511]: time="2025-02-13T23:46:58.406549601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655c684b7f-hp2zr,Uid:a5485d8a-381a-4e03-acb1-c089decfc6bb,Namespace:calico-system,Attempt:0,}" Feb 13 23:46:58.558898 systemd[1]: Created slice kubepods-besteffort-podd1e82caf_91d2_4fb9_9ee6_89b2d78222a5.slice - libcontainer container kubepods-besteffort-podd1e82caf_91d2_4fb9_9ee6_89b2d78222a5.slice. Feb 13 23:46:58.565606 containerd[1511]: time="2025-02-13T23:46:58.565490648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwh7r,Uid:d1e82caf-91d2-4fb9-9ee6-89b2d78222a5,Namespace:calico-system,Attempt:0,}" Feb 13 23:46:58.661574 containerd[1511]: time="2025-02-13T23:46:58.660950558Z" level=error msg="Failed to destroy network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.667040 containerd[1511]: time="2025-02-13T23:46:58.666656295Z" level=error msg="Failed to destroy network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.671858 containerd[1511]: time="2025-02-13T23:46:58.671807380Z" level=error msg="encountered an error cleaning up failed sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.672239 containerd[1511]: time="2025-02-13T23:46:58.672199381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsvvc,Uid:96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.672515 containerd[1511]: time="2025-02-13T23:46:58.672456867Z" level=error msg="encountered an error cleaning up failed sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.672599 containerd[1511]: time="2025-02-13T23:46:58.672539100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfpxh,Uid:86f73b1f-c829-49fc-b6e2-286cf7bba006,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.681080 containerd[1511]: time="2025-02-13T23:46:58.681037676Z" level=error msg="Failed to destroy network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.682139 containerd[1511]: time="2025-02-13T23:46:58.682100753Z" level=error msg="encountered an error cleaning up failed sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.682318 containerd[1511]: time="2025-02-13T23:46:58.682279919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655c684b7f-hp2zr,Uid:a5485d8a-381a-4e03-acb1-c089decfc6bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.686890 kubelet[2736]: E0213 23:46:58.681648 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.687054 kubelet[2736]: E0213 23:46:58.681476 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.687112 kubelet[2736]: E0213 23:46:58.687066 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xsvvc" Feb 13 23:46:58.687176 kubelet[2736]: E0213 23:46:58.687131 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xsvvc" Feb 13 23:46:58.687231 kubelet[2736]: E0213 23:46:58.687201 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xsvvc_kube-system(96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xsvvc_kube-system(96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xsvvc" podUID="96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee" Feb 13 23:46:58.687595 kubelet[2736]: E0213 23:46:58.687376 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.687595 kubelet[2736]: E0213 23:46:58.687480 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" Feb 13 23:46:58.687595 kubelet[2736]: E0213 23:46:58.687586 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zfpxh" Feb 13 23:46:58.690334 kubelet[2736]: E0213 23:46:58.687528 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" Feb 13 23:46:58.690334 kubelet[2736]: E0213 23:46:58.687690 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-655c684b7f-hp2zr_calico-system(a5485d8a-381a-4e03-acb1-c089decfc6bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-655c684b7f-hp2zr_calico-system(a5485d8a-381a-4e03-acb1-c089decfc6bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" podUID="a5485d8a-381a-4e03-acb1-c089decfc6bb" Feb 13 23:46:58.690334 kubelet[2736]: E0213 23:46:58.687614 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zfpxh" Feb 13 23:46:58.690675 kubelet[2736]: E0213 23:46:58.688199 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zfpxh_kube-system(86f73b1f-c829-49fc-b6e2-286cf7bba006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zfpxh_kube-system(86f73b1f-c829-49fc-b6e2-286cf7bba006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zfpxh" podUID="86f73b1f-c829-49fc-b6e2-286cf7bba006" Feb 13 23:46:58.723298 containerd[1511]: time="2025-02-13T23:46:58.723206766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 23:46:58.731565 kubelet[2736]: I0213 23:46:58.731511 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:46:58.735889 kubelet[2736]: I0213 23:46:58.735857 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:46:58.745773 containerd[1511]: time="2025-02-13T23:46:58.745708728Z" level=error msg="Failed to destroy network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.746644 containerd[1511]: time="2025-02-13T23:46:58.746587717Z" level=error msg="encountered an error cleaning up failed sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.746927 containerd[1511]: time="2025-02-13T23:46:58.746832863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwh7r,Uid:d1e82caf-91d2-4fb9-9ee6-89b2d78222a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.772758 kubelet[2736]: E0213 23:46:58.772262 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.772758 kubelet[2736]: E0213 23:46:58.772333 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:58.772758 kubelet[2736]: E0213 23:46:58.772367 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwh7r" Feb 13 23:46:58.773050 kubelet[2736]: E0213 23:46:58.772420 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwh7r_calico-system(d1e82caf-91d2-4fb9-9ee6-89b2d78222a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwh7r_calico-system(d1e82caf-91d2-4fb9-9ee6-89b2d78222a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:58.778278 kubelet[2736]: I0213 23:46:58.776608 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:46:58.801995 containerd[1511]: time="2025-02-13T23:46:58.801894897Z" level=info msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" Feb 13 23:46:58.809052 containerd[1511]: time="2025-02-13T23:46:58.808353483Z" level=info msg="StopPodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" Feb 13 23:46:58.809052 containerd[1511]: time="2025-02-13T23:46:58.808651343Z" level=info msg="Ensure that sandbox 
90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad in task-service has been cleanup successfully" Feb 13 23:46:58.809363 containerd[1511]: time="2025-02-13T23:46:58.809331328Z" level=info msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" Feb 13 23:46:58.810866 containerd[1511]: time="2025-02-13T23:46:58.809730411Z" level=info msg="Ensure that sandbox 5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf in task-service has been cleanup successfully" Feb 13 23:46:58.817852 containerd[1511]: time="2025-02-13T23:46:58.817516077Z" level=info msg="Ensure that sandbox 175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af in task-service has been cleanup successfully" Feb 13 23:46:58.827391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf-shm.mount: Deactivated successfully. Feb 13 23:46:58.945424 containerd[1511]: time="2025-02-13T23:46:58.945030051Z" level=error msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" failed" error="failed to destroy network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.947385 kubelet[2736]: E0213 23:46:58.945774 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:46:58.947385 kubelet[2736]: E0213 23:46:58.945867 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af"} Feb 13 23:46:58.947385 kubelet[2736]: E0213 23:46:58.945974 2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5485d8a-381a-4e03-acb1-c089decfc6bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:46:58.947385 kubelet[2736]: E0213 23:46:58.946010 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5485d8a-381a-4e03-acb1-c089decfc6bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" podUID="a5485d8a-381a-4e03-acb1-c089decfc6bb" Feb 13 23:46:58.977551 containerd[1511]: time="2025-02-13T23:46:58.977409170Z" level=error msg="StopPodSandbox for 
\"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" failed" error="failed to destroy network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.978533 containerd[1511]: time="2025-02-13T23:46:58.978397229Z" level=error msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" failed" error="failed to destroy network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:58.978968 kubelet[2736]: E0213 23:46:58.978379 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:46:58.978968 kubelet[2736]: E0213 23:46:58.978580 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad"} Feb 13 23:46:58.978968 kubelet[2736]: E0213 23:46:58.978688 2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:46:58.978968 kubelet[2736]: E0213 23:46:58.978756 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xsvvc" podUID="96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee" Feb 13 23:46:58.979417 kubelet[2736]: E0213 23:46:58.978866 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:46:58.979417 kubelet[2736]: E0213 23:46:58.978910 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf"} Feb 13 23:46:58.979417 kubelet[2736]: E0213 23:46:58.978953 2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86f73b1f-c829-49fc-b6e2-286cf7bba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:46:58.979417 kubelet[2736]: E0213 23:46:58.978994 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86f73b1f-c829-49fc-b6e2-286cf7bba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zfpxh" podUID="86f73b1f-c829-49fc-b6e2-286cf7bba006" Feb 13 23:46:59.234158 kubelet[2736]: E0213 23:46:59.233954 2736 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 23:46:59.234797 kubelet[2736]: E0213 23:46:59.234223 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-calico-apiserver-certs podName:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40 nodeName:}" failed. No retries permitted until 2025-02-13 23:46:59.734169545 +0000 UTC m=+39.399784883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-calico-apiserver-certs") pod "calico-apiserver-66d4b7ccb4-trqhb" (UID: "a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40") : failed to sync secret cache: timed out waiting for the condition Feb 13 23:46:59.234903 kubelet[2736]: E0213 23:46:59.234889 2736 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 23:46:59.235804 kubelet[2736]: E0213 23:46:59.234960 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-calico-apiserver-certs podName:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe nodeName:}" failed. No retries permitted until 2025-02-13 23:46:59.734929688 +0000 UTC m=+39.400545032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-calico-apiserver-certs") pod "calico-apiserver-66d4b7ccb4-g9d5d" (UID: "6dcdb1d9-abfe-415f-add7-16a0f07fc6fe") : failed to sync secret cache: timed out waiting for the condition Feb 13 23:46:59.251812 kubelet[2736]: E0213 23:46:59.251754 2736 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.251949 kubelet[2736]: E0213 23:46:59.251843 2736 projected.go:200] Error preparing data for projected volume kube-api-access-w4k82 for pod calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d: failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.252000 kubelet[2736]: E0213 23:46:59.251954 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-kube-api-access-w4k82 podName:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe nodeName:}" failed. No retries permitted until 2025-02-13 23:46:59.751926774 +0000 UTC m=+39.417542119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4k82" (UniqueName: "kubernetes.io/projected/6dcdb1d9-abfe-415f-add7-16a0f07fc6fe-kube-api-access-w4k82") pod "calico-apiserver-66d4b7ccb4-g9d5d" (UID: "6dcdb1d9-abfe-415f-add7-16a0f07fc6fe") : failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.252897 kubelet[2736]: E0213 23:46:59.252825 2736 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.252897 kubelet[2736]: E0213 23:46:59.252858 2736 projected.go:200] Error preparing data for projected volume kube-api-access-4k6hq for pod calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb: failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.253057 kubelet[2736]: E0213 23:46:59.252929 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-kube-api-access-4k6hq podName:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40 nodeName:}" failed. No retries permitted until 2025-02-13 23:46:59.752912285 +0000 UTC m=+39.418527637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4k6hq" (UniqueName: "kubernetes.io/projected/a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40-kube-api-access-4k6hq") pod "calico-apiserver-66d4b7ccb4-trqhb" (UID: "a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40") : failed to sync configmap cache: timed out waiting for the condition Feb 13 23:46:59.797947 kubelet[2736]: I0213 23:46:59.796571 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:46:59.799844 containerd[1511]: time="2025-02-13T23:46:59.799378218Z" level=info msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" Feb 13 23:46:59.799844 containerd[1511]: time="2025-02-13T23:46:59.799671093Z" level=info msg="Ensure that sandbox 65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5 in task-service has been cleanup successfully" Feb 13 23:46:59.845139 containerd[1511]: time="2025-02-13T23:46:59.845017446Z" level=error msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" failed" error="failed to destroy network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:46:59.845758 kubelet[2736]: E0213 23:46:59.845492 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:46:59.845758 kubelet[2736]: E0213 23:46:59.845590 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5"} Feb 13 23:46:59.845758 kubelet[2736]: E0213 23:46:59.845670 2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:46:59.845758 kubelet[2736]: E0213 23:46:59.845708 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwh7r" podUID="d1e82caf-91d2-4fb9-9ee6-89b2d78222a5" Feb 13 23:46:59.918799 containerd[1511]: time="2025-02-13T23:46:59.918745221Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-trqhb,Uid:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40,Namespace:calico-apiserver,Attempt:0,}" Feb 13 23:46:59.934386 containerd[1511]: time="2025-02-13T23:46:59.934143391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-g9d5d,Uid:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe,Namespace:calico-apiserver,Attempt:0,}" Feb 13 23:47:00.048430 containerd[1511]: time="2025-02-13T23:47:00.048259801Z" level=error msg="Failed to destroy network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.050681 containerd[1511]: time="2025-02-13T23:47:00.050635795Z" level=error msg="encountered an error cleaning up failed sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.051551 containerd[1511]: time="2025-02-13T23:47:00.050719039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-trqhb,Uid:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.054694 kubelet[2736]: E0213 23:47:00.051068 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.054694 kubelet[2736]: E0213 23:47:00.051179 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" Feb 13 23:47:00.054694 kubelet[2736]: E0213 23:47:00.051219 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" Feb 13 23:47:00.053266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055-shm.mount: Deactivated successfully. 
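
Every RunPodSandbox and StopPodSandbox failure above shares one root cause: the Calico CNI plugin refuses to do any network setup or teardown until /var/lib/calico/nodename exists, and that file is only written once the calico/node container (still being pulled at this point in the log) has started. A minimal Go sketch of that precondition check, assuming only what the error message itself states; the path is taken from the log, while the function names are illustrative, not Calico's actual source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// The calico/node container writes the host's node name here at startup;
// the CNI plugin reads it before every add/delete operation.
const nodenameFile = "/var/lib/calico/nodename"

// determineNodename reproduces the failure mode in the log: until the file
// exists, every sandbox operation fails with the same error and guidance.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Once calico-node starts (visible further down in the log), the file appears and the same StopPodSandbox calls begin to succeed.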
Feb 13 23:47:00.057226 kubelet[2736]: E0213 23:47:00.051611 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d4b7ccb4-trqhb_calico-apiserver(a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d4b7ccb4-trqhb_calico-apiserver(a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" podUID="a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40" Feb 13 23:47:00.077933 containerd[1511]: time="2025-02-13T23:47:00.077848886Z" level=error msg="Failed to destroy network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.078549 containerd[1511]: time="2025-02-13T23:47:00.078453759Z" level=error msg="encountered an error cleaning up failed sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.078661 containerd[1511]: time="2025-02-13T23:47:00.078613566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-g9d5d,Uid:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.079171 kubelet[2736]: E0213 23:47:00.079080 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.079282 kubelet[2736]: E0213 23:47:00.079197 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" Feb 13 23:47:00.079420 kubelet[2736]: E0213 23:47:00.079287 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" Feb 13 23:47:00.080343 kubelet[2736]: E0213 23:47:00.079524 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d4b7ccb4-g9d5d_calico-apiserver(6dcdb1d9-abfe-415f-add7-16a0f07fc6fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d4b7ccb4-g9d5d_calico-apiserver(6dcdb1d9-abfe-415f-add7-16a0f07fc6fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" podUID="6dcdb1d9-abfe-415f-add7-16a0f07fc6fe" Feb 13 23:47:00.801324 kubelet[2736]: I0213 23:47:00.801118 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:00.806196 containerd[1511]: time="2025-02-13T23:47:00.802120946Z" level=info msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" Feb 13 23:47:00.806196 containerd[1511]: time="2025-02-13T23:47:00.802392681Z" level=info msg="Ensure that sandbox e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055 in task-service has been cleanup successfully" Feb 13 23:47:00.819015 kubelet[2736]: I0213 23:47:00.818076 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:00.822426 containerd[1511]: time="2025-02-13T23:47:00.822192342Z" level=info msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" Feb 13 23:47:00.822963 containerd[1511]: time="2025-02-13T23:47:00.822472226Z" level=info msg="Ensure that sandbox a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6 in task-service has been cleanup successfully" Feb 13 23:47:00.904159 containerd[1511]: time="2025-02-13T23:47:00.904011786Z" level=error msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" failed" error="failed to destroy network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.904787 kubelet[2736]: E0213 23:47:00.904394 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:00.904787 kubelet[2736]: E0213 23:47:00.904482 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055"} Feb 13 23:47:00.904787 kubelet[2736]: E0213 23:47:00.904536 
2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:47:00.904787 kubelet[2736]: E0213 23:47:00.904574 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" podUID="a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40" Feb 13 23:47:00.926928 containerd[1511]: time="2025-02-13T23:47:00.926718717Z" level=error msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" failed" error="failed to destroy network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 23:47:00.927696 kubelet[2736]: E0213 23:47:00.927466 2736 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:00.927696 kubelet[2736]: E0213 23:47:00.927542 2736 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6"} Feb 13 23:47:00.927696 kubelet[2736]: E0213 23:47:00.927595 2736 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 23:47:00.927696 kubelet[2736]: E0213 23:47:00.927630 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" podUID="6dcdb1d9-abfe-415f-add7-16a0f07fc6fe" Feb 13 23:47:00.932269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6-shm.mount: Deactivated successfully. Feb 13 23:47:06.297859 kubelet[2736]: I0213 23:47:06.296780 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 23:47:09.502692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785389596.mount: Deactivated successfully. Feb 13 23:47:09.597619 containerd[1511]: time="2025-02-13T23:47:09.593508515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 23:47:09.598440 containerd[1511]: time="2025-02-13T23:47:09.590327414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:09.651721 containerd[1511]: time="2025-02-13T23:47:09.651654489Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:09.652873 containerd[1511]: time="2025-02-13T23:47:09.652823398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.929153881s" Feb 13 23:47:09.652963 containerd[1511]: time="2025-02-13T23:47:09.652880549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 23:47:09.653844 containerd[1511]: time="2025-02-13T23:47:09.653804747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:09.703716 containerd[1511]: time="2025-02-13T23:47:09.703661750Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 23:47:09.774893 containerd[1511]: time="2025-02-13T23:47:09.772784759Z" level=info msg="CreateContainer within sandbox \"abf7a5ca0fb7e2638265b7fab23f231b60053c1cd9bb9a158da990e959d8d58b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2184529cefce184f446e2a08d9aefbcf2df138cdb7557f07ae17574a6dd1e1ee\"" Feb 13 23:47:09.782656 containerd[1511]: time="2025-02-13T23:47:09.782595234Z" level=info msg="StartContainer for \"2184529cefce184f446e2a08d9aefbcf2df138cdb7557f07ae17574a6dd1e1ee\"" Feb 13 23:47:09.918662 systemd[1]: Started cri-containerd-2184529cefce184f446e2a08d9aefbcf2df138cdb7557f07ae17574a6dd1e1ee.scope - libcontainer container 2184529cefce184f446e2a08d9aefbcf2df138cdb7557f07ae17574a6dd1e1ee. Feb 13 23:47:09.994059 containerd[1511]: time="2025-02-13T23:47:09.993913072Z" level=info msg="StartContainer for \"2184529cefce184f446e2a08d9aefbcf2df138cdb7557f07ae17574a6dd1e1ee\" returns successfully" Feb 13 23:47:10.208697 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Feb 13 23:47:10.210051 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 23:47:10.560691 containerd[1511]: time="2025-02-13T23:47:10.559924792Z" level=info msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" Feb 13 23:47:10.567371 containerd[1511]: time="2025-02-13T23:47:10.560297136Z" level=info msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.756 [INFO][3833] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.757 [INFO][3833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" iface="eth0" netns="/var/run/netns/cni-7545bb3b-aad4-dfff-4956-f45ccbea20c3" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.757 [INFO][3833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" iface="eth0" netns="/var/run/netns/cni-7545bb3b-aad4-dfff-4956-f45ccbea20c3" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.760 [INFO][3833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" iface="eth0" netns="/var/run/netns/cni-7545bb3b-aad4-dfff-4956-f45ccbea20c3" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.760 [INFO][3833] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.760 [INFO][3833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.981 [INFO][3857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.982 [INFO][3857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.983 [INFO][3857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.999 [WARNING][3857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:10.999 [INFO][3857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:11.001 [INFO][3857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:11.007170 containerd[1511]: 2025-02-13 23:47:11.004 [INFO][3833] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:11.010552 containerd[1511]: time="2025-02-13T23:47:11.009532372Z" level=info msg="TearDown network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" successfully" Feb 13 23:47:11.010552 containerd[1511]: time="2025-02-13T23:47:11.010519579Z" level=info msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" returns successfully" Feb 13 23:47:11.013321 containerd[1511]: time="2025-02-13T23:47:11.012683658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfpxh,Uid:86f73b1f-c829-49fc-b6e2-286cf7bba006,Namespace:kube-system,Attempt:1,}" Feb 13 23:47:11.015105 systemd[1]: run-netns-cni\x2d7545bb3b\x2daad4\x2ddfff\x2d4956\x2df45ccbea20c3.mount: Deactivated successfully. Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.750 [INFO][3838] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.753 [INFO][3838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" iface="eth0" netns="/var/run/netns/cni-699aab08-74d4-7352-8204-91daf0529507" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.754 [INFO][3838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" iface="eth0" netns="/var/run/netns/cni-699aab08-74d4-7352-8204-91daf0529507" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.758 [INFO][3838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" iface="eth0" netns="/var/run/netns/cni-699aab08-74d4-7352-8204-91daf0529507" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.758 [INFO][3838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.758 [INFO][3838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.980 [INFO][3856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:10.982 [INFO][3856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:11.001 [INFO][3856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:11.018 [WARNING][3856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:11.018 [INFO][3856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:11.021 [INFO][3856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:11.030892 containerd[1511]: 2025-02-13 23:47:11.026 [INFO][3838] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:11.032431 containerd[1511]: time="2025-02-13T23:47:11.031058868Z" level=info msg="TearDown network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" successfully" Feb 13 23:47:11.032431 containerd[1511]: time="2025-02-13T23:47:11.031096389Z" level=info msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" returns successfully" Feb 13 23:47:11.033981 containerd[1511]: time="2025-02-13T23:47:11.033431450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655c684b7f-hp2zr,Uid:a5485d8a-381a-4e03-acb1-c089decfc6bb,Namespace:calico-system,Attempt:1,}" Feb 13 23:47:11.038579 systemd[1]: run-netns-cni\x2d699aab08\x2d74d4\x2d7352\x2d8204\x2d91daf0529507.mount: Deactivated successfully. 
Feb 13 23:47:11.284837 systemd-networkd[1439]: cali1e1816d5c97: Link UP Feb 13 23:47:11.289330 systemd-networkd[1439]: cali1e1816d5c97: Gained carrier Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.104 [INFO][3873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.124 [INFO][3873] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0 coredns-7db6d8ff4d- kube-system 86f73b1f-c829-49fc-b6e2-286cf7bba006 759 0 2025-02-13 23:46:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com coredns-7db6d8ff4d-zfpxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e1816d5c97 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.125 [INFO][3873] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.187 [INFO][3898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" HandleID="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.207 [INFO][3898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" HandleID="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-zfpxh", "timestamp":"2025-02-13 23:47:11.187460944 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.207 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.207 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.208 [INFO][3898] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.214 [INFO][3898] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.230 [INFO][3898] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.236 [INFO][3898] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.238 [INFO][3898] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.241 [INFO][3898] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.241 [INFO][3898] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.244 [INFO][3898] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.250 [INFO][3898] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.259 [INFO][3898] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.259 [INFO][3898] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.259 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
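
On the assignment side, the [3898] entries trace Calico's block-affinity IPAM strategy: under the same host-wide lock, look up the host's existing affinities, try the affine block (192.168.106.128/26 here), load it, claim the next free address, and write the block back to the datastore. A compressed sketch of that claim loop, again with invented types rather than Calico's real block structure:

// Sketch of block-affinity assignment following the [3898] sequence.
package ipamsketch

import (
	"fmt"
	"net"
)

// Block is a host-affine CIDR (a /26, i.e. 64 addresses, in this log).
type Block struct {
	CIDR      net.IPNet
	Allocated map[string]bool // IP string -> claimed
}

// AutoAssign claims the next free IPv4 address in the block, matching the
// logged steps: "Trying affinity" -> "Attempting to load block" ->
// "Attempting to assign" -> "Writing block in order to claim IPs".
func AutoAssign(b *Block, host string) (net.IP, error) {
	if b.Allocated == nil {
		b.Allocated = make(map[string]bool)
	}
	base := b.CIDR.IP.To4()
	ones, bits := b.CIDR.Mask.Size()
	for i := 1; i < (1<<(bits-ones))-1; i++ { // skip network/broadcast addresses
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		if !b.Allocated[ip.String()] {
			b.Allocated[ip.String()] = true // persisting this is the "write block" step
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted for host %s", b.CIDR.String(), host)
}

For the .128/26 block above, the first free address is 192.168.106.129, which is exactly what the log claims and then writes into the WorkloadEndpoint below (IPNetworks 192.168.106.129/32 on cali1e1816d5c97).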
Feb 13 23:47:11.325497 containerd[1511]: 2025-02-13 23:47:11.259 [INFO][3898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" HandleID="k8s-pod-network.36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.262 [INFO][3873] cni-plugin/k8s.go 386: Populated endpoint ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"86f73b1f-c829-49fc-b6e2-286cf7bba006", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-zfpxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e1816d5c97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.262 [INFO][3873] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.129/32] ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.262 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e1816d5c97 ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.281 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" 
WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.288 [INFO][3873] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"86f73b1f-c829-49fc-b6e2-286cf7bba006", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f", Pod:"coredns-7db6d8ff4d-zfpxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e1816d5c97", MAC:"6e:15:95:a4:b5:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:11.332985 containerd[1511]: 2025-02-13 23:47:11.320 [INFO][3873] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zfpxh" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:11.341739 kubelet[2736]: I0213 23:47:11.338025 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tvnmz" podStartSLOduration=2.659955696 podStartE2EDuration="27.323617638s" podCreationTimestamp="2025-02-13 23:46:44 +0000 UTC" firstStartedPulling="2025-02-13 23:46:44.994201591 +0000 UTC m=+24.659816934" lastFinishedPulling="2025-02-13 23:47:09.657863525 +0000 UTC m=+49.323478876" observedRunningTime="2025-02-13 23:47:10.923410257 +0000 UTC m=+50.589025632" watchObservedRunningTime="2025-02-13 23:47:11.323617638 +0000 UTC m=+50.989232990" Feb 13 23:47:11.357635 systemd-networkd[1439]: calib8903c17bad: Link UP Feb 13 23:47:11.357976 systemd-networkd[1439]: calib8903c17bad: Gained carrier Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.127 [INFO][3874] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.150 [INFO][3874] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0 calico-kube-controllers-655c684b7f- calico-system a5485d8a-381a-4e03-acb1-c089decfc6bb 760 0 2025-02-13 23:46:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:655c684b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com calico-kube-controllers-655c684b7f-hp2zr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib8903c17bad [] []}} ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.150 [INFO][3874] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.215 [INFO][3902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" HandleID="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.229 [INFO][3902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" HandleID="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed330), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"calico-kube-controllers-655c684b7f-hp2zr", "timestamp":"2025-02-13 23:47:11.215865669 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.229 [INFO][3902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.259 [INFO][3902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.261 [INFO][3902] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.266 [INFO][3902] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.276 [INFO][3902] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.287 [INFO][3902] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.291 [INFO][3902] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.295 [INFO][3902] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.296 [INFO][3902] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.300 [INFO][3902] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860 Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.306 [INFO][3902] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.313 [INFO][3902] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.313 [INFO][3902] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.313 [INFO][3902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
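
Both ADD handlers above ([3898] and [3902]) wrap their assignment in the "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" pair, so the second request only runs after the first has written its claim and cleanly receives the next free address, 192.168.106.130. A sketch of that serialization, using an in-process mutex purely for illustration (Calico's lock is host-wide across processes, which a mutex does not model):

package main

import (
	"fmt"
	"sync"
)

var (
	mu   sync.Mutex
	next = 129 // first host value handed out in the log
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	mu.Lock() // "Acquired host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.106.%d/26", next)
	next++
	mu.Unlock() // "Released host-wide IPAM lock."
	fmt.Println(pod, "->", ip)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("coredns-7db6d8ff4d-zfpxh", &wg)
	go assign("calico-kube-controllers-655c684b7f-hp2zr", &wg)
	wg.Wait()
}
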
Feb 13 23:47:11.398382 containerd[1511]: 2025-02-13 23:47:11.313 [INFO][3902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" HandleID="k8s-pod-network.fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.332 [INFO][3874] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0", GenerateName:"calico-kube-controllers-655c684b7f-", Namespace:"calico-system", SelfLink:"", UID:"a5485d8a-381a-4e03-acb1-c089decfc6bb", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655c684b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-655c684b7f-hp2zr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8903c17bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.334 [INFO][3874] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.130/32] ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.334 [INFO][3874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8903c17bad ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.356 [INFO][3874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 
23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.356 [INFO][3874] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0", GenerateName:"calico-kube-controllers-655c684b7f-", Namespace:"calico-system", SelfLink:"", UID:"a5485d8a-381a-4e03-acb1-c089decfc6bb", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655c684b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860", Pod:"calico-kube-controllers-655c684b7f-hp2zr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8903c17bad", MAC:"16:2d:77:7e:38:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:11.400509 containerd[1511]: 2025-02-13 23:47:11.388 [INFO][3874] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860" Namespace="calico-system" Pod="calico-kube-controllers-655c684b7f-hp2zr" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:11.439203 containerd[1511]: time="2025-02-13T23:47:11.438981217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:11.439882 containerd[1511]: time="2025-02-13T23:47:11.439550879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:11.440889 containerd[1511]: time="2025-02-13T23:47:11.440673058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:11.441026 containerd[1511]: time="2025-02-13T23:47:11.440877264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:11.474562 containerd[1511]: time="2025-02-13T23:47:11.473354565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:11.475017 containerd[1511]: time="2025-02-13T23:47:11.474354449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:11.475017 containerd[1511]: time="2025-02-13T23:47:11.474505751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:11.477509 containerd[1511]: time="2025-02-13T23:47:11.474949581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:11.482499 systemd[1]: Started cri-containerd-36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f.scope - libcontainer container 36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f. Feb 13 23:47:11.537471 systemd[1]: Started cri-containerd-fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860.scope - libcontainer container fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860. Feb 13 23:47:11.543782 containerd[1511]: time="2025-02-13T23:47:11.543738299Z" level=info msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" Feb 13 23:47:11.594311 containerd[1511]: time="2025-02-13T23:47:11.593286696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfpxh,Uid:86f73b1f-c829-49fc-b6e2-286cf7bba006,Namespace:kube-system,Attempt:1,} returns sandbox id \"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f\"" Feb 13 23:47:11.620054 containerd[1511]: time="2025-02-13T23:47:11.619993852Z" level=info msg="CreateContainer within sandbox \"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 23:47:11.693393 containerd[1511]: time="2025-02-13T23:47:11.693303667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655c684b7f-hp2zr,Uid:a5485d8a-381a-4e03-acb1-c089decfc6bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860\"" Feb 13 23:47:11.697752 containerd[1511]: time="2025-02-13T23:47:11.697684759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 23:47:11.776150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971119227.mount: Deactivated successfully. Feb 13 23:47:11.802072 containerd[1511]: time="2025-02-13T23:47:11.801388617Z" level=info msg="CreateContainer within sandbox \"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60454752d3a9b43c91e999c3e0d597f8f0cacae1e9d567ed8929de2f269bb930\"" Feb 13 23:47:11.804779 containerd[1511]: time="2025-02-13T23:47:11.803491930Z" level=info msg="StartContainer for \"60454752d3a9b43c91e999c3e0d597f8f0cacae1e9d567ed8929de2f269bb930\"" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.729 [INFO][4015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.731 [INFO][4015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" iface="eth0" netns="/var/run/netns/cni-94a0d0e3-3452-57a2-0313-17f1af7c480b" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.732 [INFO][4015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" iface="eth0" netns="/var/run/netns/cni-94a0d0e3-3452-57a2-0313-17f1af7c480b" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.732 [INFO][4015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" iface="eth0" netns="/var/run/netns/cni-94a0d0e3-3452-57a2-0313-17f1af7c480b" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.732 [INFO][4015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.732 [INFO][4015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.777 [INFO][4034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.781 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.782 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.799 [WARNING][4034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.800 [INFO][4034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.804 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:11.814241 containerd[1511]: 2025-02-13 23:47:11.808 [INFO][4015] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:11.816643 containerd[1511]: time="2025-02-13T23:47:11.814394252Z" level=info msg="TearDown network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" successfully" Feb 13 23:47:11.816643 containerd[1511]: time="2025-02-13T23:47:11.814452214Z" level=info msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" returns successfully" Feb 13 23:47:11.817829 containerd[1511]: time="2025-02-13T23:47:11.817146644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwh7r,Uid:d1e82caf-91d2-4fb9-9ee6-89b2d78222a5,Namespace:calico-system,Attempt:1,}" Feb 13 23:47:11.868705 systemd[1]: Started cri-containerd-60454752d3a9b43c91e999c3e0d597f8f0cacae1e9d567ed8929de2f269bb930.scope - libcontainer container 60454752d3a9b43c91e999c3e0d597f8f0cacae1e9d567ed8929de2f269bb930. Feb 13 23:47:12.069204 containerd[1511]: time="2025-02-13T23:47:12.069012584Z" level=info msg="StartContainer for \"60454752d3a9b43c91e999c3e0d597f8f0cacae1e9d567ed8929de2f269bb930\" returns successfully" Feb 13 23:47:12.385350 systemd-networkd[1439]: calif4c0a3723c4: Link UP Feb 13 23:47:12.387152 systemd-networkd[1439]: calif4c0a3723c4: Gained carrier Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:11.962 [INFO][4090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:11.992 [INFO][4090] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0 csi-node-driver- calico-system d1e82caf-91d2-4fb9-9ee6-89b2d78222a5 773 0 2025-02-13 23:46:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com csi-node-driver-rwh7r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif4c0a3723c4 [] []}} ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:11.992 [INFO][4090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.191 [INFO][4133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" HandleID="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.283 [INFO][4133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" HandleID="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" 
Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f8280), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"csi-node-driver-rwh7r", "timestamp":"2025-02-13 23:47:12.191069869 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.283 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.283 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.283 [INFO][4133] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.293 [INFO][4133] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.315 [INFO][4133] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.332 [INFO][4133] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.339 [INFO][4133] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.343 [INFO][4133] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.345 [INFO][4133] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.348 [INFO][4133] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.357 [INFO][4133] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.368 [INFO][4133] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.369 [INFO][4133] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.369 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 23:47:12.410873 containerd[1511]: 2025-02-13 23:47:12.369 [INFO][4133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" HandleID="k8s-pod-network.4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.377 [INFO][4090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-rwh7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4c0a3723c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.377 [INFO][4090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.131/32] ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.377 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4c0a3723c4 ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.388 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.389 [INFO][4090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f", Pod:"csi-node-driver-rwh7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4c0a3723c4", MAC:"82:e1:a2:ed:39:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:12.426782 containerd[1511]: 2025-02-13 23:47:12.407 [INFO][4090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f" Namespace="calico-system" Pod="csi-node-driver-rwh7r" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:12.492363 systemd-networkd[1439]: calib8903c17bad: Gained IPv6LL Feb 13 23:47:12.503645 containerd[1511]: time="2025-02-13T23:47:12.501376415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:12.503645 containerd[1511]: time="2025-02-13T23:47:12.501585490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:12.503645 containerd[1511]: time="2025-02-13T23:47:12.501636365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:12.503645 containerd[1511]: time="2025-02-13T23:47:12.501843612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:12.506502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272031471.mount: Deactivated successfully. Feb 13 23:47:12.506695 systemd[1]: run-netns-cni\x2d94a0d0e3\x2d3452\x2d57a2\x2d0313\x2d17f1af7c480b.mount: Deactivated successfully. 
Feb 13 23:47:12.545387 containerd[1511]: time="2025-02-13T23:47:12.544606172Z" level=info msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" Feb 13 23:47:12.551751 containerd[1511]: time="2025-02-13T23:47:12.551700638Z" level=info msg="StopPodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" Feb 13 23:47:12.562553 systemd[1]: Started cri-containerd-4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f.scope - libcontainer container 4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f. Feb 13 23:47:12.619287 systemd-networkd[1439]: cali1e1816d5c97: Gained IPv6LL Feb 13 23:47:12.742015 containerd[1511]: time="2025-02-13T23:47:12.741959121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwh7r,Uid:d1e82caf-91d2-4fb9-9ee6-89b2d78222a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f\"" Feb 13 23:47:12.965534 kubelet[2736]: I0213 23:47:12.965217 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zfpxh" podStartSLOduration=36.965194698 podStartE2EDuration="36.965194698s" podCreationTimestamp="2025-02-13 23:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:47:12.962716755 +0000 UTC m=+52.628332126" watchObservedRunningTime="2025-02-13 23:47:12.965194698 +0000 UTC m=+52.630810061" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.829 [INFO][4278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.829 [INFO][4278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" iface="eth0" netns="/var/run/netns/cni-def1f49d-9733-9866-c5b0-d3831772bde1" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.829 [INFO][4278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" iface="eth0" netns="/var/run/netns/cni-def1f49d-9733-9866-c5b0-d3831772bde1" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.830 [INFO][4278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" iface="eth0" netns="/var/run/netns/cni-def1f49d-9733-9866-c5b0-d3831772bde1" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.830 [INFO][4278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.830 [INFO][4278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.955 [INFO][4301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.956 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.956 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.976 [WARNING][4301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.976 [INFO][4301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.979 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:12.988510 containerd[1511]: 2025-02-13 23:47:12.982 [INFO][4278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:12.996595 containerd[1511]: time="2025-02-13T23:47:12.993330979Z" level=info msg="TearDown network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" successfully" Feb 13 23:47:12.996595 containerd[1511]: time="2025-02-13T23:47:12.993376516Z" level=info msg="StopPodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" returns successfully" Feb 13 23:47:12.998377 systemd[1]: run-netns-cni\x2ddef1f49d\x2d9733\x2d9866\x2dc5b0\x2dd3831772bde1.mount: Deactivated successfully. Feb 13 23:47:13.001394 containerd[1511]: time="2025-02-13T23:47:12.999060783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsvvc,Uid:96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee,Namespace:kube-system,Attempt:1,}" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.833 [INFO][4279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.835 [INFO][4279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" iface="eth0" netns="/var/run/netns/cni-6f77a9da-03ed-7b8e-911d-748be83a4e72" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.836 [INFO][4279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" iface="eth0" netns="/var/run/netns/cni-6f77a9da-03ed-7b8e-911d-748be83a4e72" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.838 [INFO][4279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" iface="eth0" netns="/var/run/netns/cni-6f77a9da-03ed-7b8e-911d-748be83a4e72" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.838 [INFO][4279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.839 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.974 [INFO][4302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.975 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:12.979 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:13.000 [WARNING][4302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:13.000 [INFO][4302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:13.005 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:13.014812 containerd[1511]: 2025-02-13 23:47:13.010 [INFO][4279] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:13.018750 containerd[1511]: time="2025-02-13T23:47:13.015554353Z" level=info msg="TearDown network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" successfully" Feb 13 23:47:13.018750 containerd[1511]: time="2025-02-13T23:47:13.015744828Z" level=info msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" returns successfully" Feb 13 23:47:13.018750 containerd[1511]: time="2025-02-13T23:47:13.017485634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-trqhb,Uid:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40,Namespace:calico-apiserver,Attempt:1,}" Feb 13 23:47:13.026180 systemd[1]: run-netns-cni\x2d6f77a9da\x2d03ed\x2d7b8e\x2d911d\x2d748be83a4e72.mount: Deactivated successfully. Feb 13 23:47:13.494280 kernel: bpftool[4410]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 23:47:13.543167 containerd[1511]: time="2025-02-13T23:47:13.542757272Z" level=info msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" Feb 13 23:47:13.593472 systemd-networkd[1439]: cali57997d9d43e: Link UP Feb 13 23:47:13.595662 systemd-networkd[1439]: cali57997d9d43e: Gained carrier Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.246 [INFO][4322] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.299 [INFO][4322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0 coredns-7db6d8ff4d- kube-system 96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee 784 0 2025-02-13 23:46:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com coredns-7db6d8ff4d-xsvvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57997d9d43e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.299 [INFO][4322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.401 [INFO][4390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" HandleID="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.431 [INFO][4390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" HandleID="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337970), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-xsvvc", "timestamp":"2025-02-13 23:47:13.401920272 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.431 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.431 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.431 [INFO][4390] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.438 [INFO][4390] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.455 [INFO][4390] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.508 [INFO][4390] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.521 [INFO][4390] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.531 [INFO][4390] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.531 [INFO][4390] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.536 [INFO][4390] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.558 [INFO][4390] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.576 [INFO][4390] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.576 [INFO][4390] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.576 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
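
The run-netns-cni\x2d… mount units in the surrounding entries are not corrupted text: systemd escapes "-" in unit names derived from paths as \x2d, so run-netns-cni\x2d6f77a9da… is the mount unit for the netns path /var/run/netns/cni-6f77a9da-… seen in the e77ff33a… teardown. Undoing that single escape:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A unit name as it appears in the journal entries above.
	unit := `run-netns-cni\x2d6f77a9da\x2d03ed\x2d7b8e\x2d911d\x2d748be83a4e72.mount`
	// Prints run-netns-cni-6f77a9da-03ed-7b8e-911d-748be83a4e72.mount
	fmt.Println(strings.ReplaceAll(unit, `\x2d`, "-"))
}
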
Feb 13 23:47:13.658303 containerd[1511]: 2025-02-13 23:47:13.576 [INFO][4390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" HandleID="k8s-pod-network.14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.585 [INFO][4322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-xsvvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57997d9d43e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.585 [INFO][4322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.132/32] ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.585 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57997d9d43e ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.598 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" 
WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.606 [INFO][4322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf", Pod:"coredns-7db6d8ff4d-xsvvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57997d9d43e", MAC:"aa:84:6b:64:5e:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:13.661200 containerd[1511]: 2025-02-13 23:47:13.634 [INFO][4322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsvvc" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:13.725176 systemd-networkd[1439]: cali2495fda403f: Link UP Feb 13 23:47:13.729595 systemd-networkd[1439]: cali2495fda403f: Gained carrier Feb 13 23:47:13.785337 containerd[1511]: time="2025-02-13T23:47:13.783659752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:13.785337 containerd[1511]: time="2025-02-13T23:47:13.783987352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:13.785337 containerd[1511]: time="2025-02-13T23:47:13.784016142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:13.785337 containerd[1511]: time="2025-02-13T23:47:13.784392998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.257 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.290 [INFO][4333] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0 calico-apiserver-66d4b7ccb4- calico-apiserver a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40 785 0 2025-02-13 23:46:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d4b7ccb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com calico-apiserver-66d4b7ccb4-trqhb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2495fda403f [] []}} ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.291 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.425 [INFO][4389] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" HandleID="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.446 [INFO][4389] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" HandleID="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d9710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"calico-apiserver-66d4b7ccb4-trqhb", "timestamp":"2025-02-13 23:47:13.424301154 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.446 [INFO][4389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.577 [INFO][4389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
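A naming detail worth noting in the WorkloadEndpoint dumps above: the object name srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0 doubles every dash inside the node, pod, and endpoint components, so the single-dash separators between them stay unambiguous. A sketch of that escaping, inferred from the logged names rather than taken from Calico's source:

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName joins node, orchestrator, pod, and endpoint with single
    // dashes after doubling the dashes inside each component.
    func wepName(node, orchestrator, pod, endpoint string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return esc(node) + "-" + orchestrator + "-" + esc(pod) + "-" + esc(endpoint)
    }

    func main() {
        fmt.Println(wepName("srv-gs5j1.gb1.brightbox.com", "k8s", "coredns-7db6d8ff4d-xsvvc", "eth0"))
        // Output: srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0
    }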
Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.577 [INFO][4389] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.585 [INFO][4389] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.612 [INFO][4389] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.632 [INFO][4389] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.641 [INFO][4389] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.647 [INFO][4389] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.649 [INFO][4389] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.657 [INFO][4389] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09 Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.668 [INFO][4389] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.689 [INFO][4389] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.689 [INFO][4389] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.689 [INFO][4389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
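The Ports lists in the coredns endpoint dumps a few entries up are printed in hex. Decoded, they are the usual coredns ports:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35)   // 53   -> the "dns" (UDP) and "dns-tcp" (TCP) ports
        fmt.Println(0x23c1) // 9153 -> the "metrics" (TCP) port
    }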
Feb 13 23:47:13.786475 containerd[1511]: 2025-02-13 23:47:13.689 [INFO][4389] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" HandleID="k8s-pod-network.9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.699 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66d4b7ccb4-trqhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2495fda403f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.700 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.133/32] ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.700 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2495fda403f ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.726 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.732 [INFO][4333] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09", Pod:"calico-apiserver-66d4b7ccb4-trqhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2495fda403f", MAC:"0e:df:06:ad:16:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:13.788762 containerd[1511]: 2025-02-13 23:47:13.768 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-trqhb" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:13.852817 systemd[1]: run-containerd-runc-k8s.io-14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf-runc.hBI8zX.mount: Deactivated successfully. Feb 13 23:47:13.872469 systemd[1]: Started cri-containerd-14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf.scope - libcontainer container 14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf. Feb 13 23:47:13.926281 containerd[1511]: time="2025-02-13T23:47:13.925039558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:13.926281 containerd[1511]: time="2025-02-13T23:47:13.925215970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:13.926583 containerd[1511]: time="2025-02-13T23:47:13.926467563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:13.928448 containerd[1511]: time="2025-02-13T23:47:13.927972600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:13.984522 systemd[1]: Started cri-containerd-9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09.scope - libcontainer container 9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09. Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.767 [INFO][4424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.773 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" iface="eth0" netns="/var/run/netns/cni-8dce1fa8-2a28-180f-f342-85d304e84c24" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.775 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" iface="eth0" netns="/var/run/netns/cni-8dce1fa8-2a28-180f-f342-85d304e84c24" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.776 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" iface="eth0" netns="/var/run/netns/cni-8dce1fa8-2a28-180f-f342-85d304e84c24" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.777 [INFO][4424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.777 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.932 [INFO][4462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.936 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.943 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.988 [WARNING][4462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.988 [INFO][4462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:13.995 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
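The teardown of sandbox a5640b77... above shows the idempotent shape of the release path: release by handle ID, warn and ignore "address doesn't exist", then release by workload ID as a fallback (ipam_plugin.go 412, 429, and 440). A sketch of that control flow; the two release functions are placeholders standing in for the libcalico-go calls, not the real API:

    package main

    import (
        "errors"
        "log"
    )

    var errNotExist = errors.New("address doesn't exist")

    // Placeholders for the real IPAM release calls.
    func releaseByHandle(handleID string) error     { return errNotExist }
    func releaseByWorkload(workloadID string) error { return nil }

    // release mirrors the logged flow: try the handle ID, warn and
    // ignore a missing address, then fall back to the workload ID.
    func release(handleID, workloadID string) {
        if err := releaseByHandle(handleID); err != nil {
            if errors.Is(err, errNotExist) {
                log.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
            }
            if err := releaseByWorkload(workloadID); err != nil {
                log.Println("release by workload ID failed:", err)
            }
        }
    }

    func main() {
        // Handle ID from the teardown above; the workload ID here is
        // illustrative.
        release(
            "k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6",
            "calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d",
        )
    }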
Feb 13 23:47:14.006735 containerd[1511]: 2025-02-13 23:47:14.001 [INFO][4424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:14.033037 containerd[1511]: time="2025-02-13T23:47:14.032821992Z" level=info msg="TearDown network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" successfully" Feb 13 23:47:14.034716 containerd[1511]: time="2025-02-13T23:47:14.033780022Z" level=info msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" returns successfully" Feb 13 23:47:14.038699 containerd[1511]: time="2025-02-13T23:47:14.036231527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-g9d5d,Uid:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe,Namespace:calico-apiserver,Attempt:1,}" Feb 13 23:47:14.090467 systemd-networkd[1439]: calif4c0a3723c4: Gained IPv6LL Feb 13 23:47:14.097777 containerd[1511]: time="2025-02-13T23:47:14.097591226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsvvc,Uid:96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee,Namespace:kube-system,Attempt:1,} returns sandbox id \"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf\"" Feb 13 23:47:14.107926 containerd[1511]: time="2025-02-13T23:47:14.107863563Z" level=info msg="CreateContainer within sandbox \"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 23:47:14.167475 containerd[1511]: time="2025-02-13T23:47:14.167397522Z" level=info msg="CreateContainer within sandbox \"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2637e8d7891558268d2b031d3509adefa97c319b6207582fec8c19904a9774f3\"" Feb 13 23:47:14.172093 containerd[1511]: time="2025-02-13T23:47:14.172008672Z" level=info msg="StartContainer for \"2637e8d7891558268d2b031d3509adefa97c319b6207582fec8c19904a9774f3\"" Feb 13 23:47:14.264079 containerd[1511]: time="2025-02-13T23:47:14.263492351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-trqhb,Uid:a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09\"" Feb 13 23:47:14.294117 systemd[1]: Started cri-containerd-2637e8d7891558268d2b031d3509adefa97c319b6207582fec8c19904a9774f3.scope - libcontainer container 2637e8d7891558268d2b031d3509adefa97c319b6207582fec8c19904a9774f3. Feb 13 23:47:14.403565 containerd[1511]: time="2025-02-13T23:47:14.403513075Z" level=info msg="StartContainer for \"2637e8d7891558268d2b031d3509adefa97c319b6207582fec8c19904a9774f3\" returns successfully" Feb 13 23:47:14.509115 systemd[1]: run-netns-cni\x2d8dce1fa8\x2d2a28\x2d180f\x2df342\x2d85d304e84c24.mount: Deactivated successfully. 
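Following one sandbox through this interleaved output (RunPodSandbox, the CNI ADD and DEL entries, TearDown, StopPodSandbox) is easiest by pulling out the 64-hex-character ContainerID fields. A small, purely hypothetical helper for that; it matches only the text of these journal lines and is not part of containerd or Calico:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the ContainerID="<64 hex chars>" fields in the CNI entries.
    var containerID = regexp.MustCompile(`ContainerID="([0-9a-f]{64})"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // these journal lines are very long
        for sc.Scan() {
            for _, m := range containerID.FindAllStringSubmatch(sc.Text(), -1) {
                fmt.Println(m[1])
            }
        }
    }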
Feb 13 23:47:14.547647 systemd-networkd[1439]: calib5fbbd09e32: Link UP Feb 13 23:47:14.550181 systemd-networkd[1439]: calib5fbbd09e32: Gained carrier Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.306 [INFO][4536] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0 calico-apiserver-66d4b7ccb4- calico-apiserver 6dcdb1d9-abfe-415f-add7-16a0f07fc6fe 801 0 2025-02-13 23:46:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d4b7ccb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gs5j1.gb1.brightbox.com calico-apiserver-66d4b7ccb4-g9d5d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5fbbd09e32 [] []}} ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.306 [INFO][4536] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.385 [INFO][4583] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" HandleID="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.409 [INFO][4583] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" HandleID="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039dcf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gs5j1.gb1.brightbox.com", "pod":"calico-apiserver-66d4b7ccb4-g9d5d", "timestamp":"2025-02-13 23:47:14.385259127 +0000 UTC"}, Hostname:"srv-gs5j1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.409 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.410 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
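The host-side interface names seen so far (cali57997d9d43e, cali2495fda403f, and now calib5fbbd09e32) all follow the pattern "cali" plus eleven hex characters, i.e. a deterministic hash of the workload. The sketch below reconstructs that pattern under the assumption, not verified against this plugin version, that the hash input is "namespace.pod":

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName reproduces the "cali" + 11-hex-chars shape of the logged
    // interface names. The SHA-1-of-"namespace.pod" input is an
    // assumption; compare the output against the journal before relying
    // on it.
    func vethName(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("kube-system", "coredns-7db6d8ff4d-xsvvc"))
    }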
Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.410 [INFO][4583] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gs5j1.gb1.brightbox.com' Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.415 [INFO][4583] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.426 [INFO][4583] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.455 [INFO][4583] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.463 [INFO][4583] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.472 [INFO][4583] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.473 [INFO][4583] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.478 [INFO][4583] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499 Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.491 [INFO][4583] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.520 [INFO][4583] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.520 [INFO][4583] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" host="srv-gs5j1.gb1.brightbox.com" Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.520 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
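With this third assignment the node has handed out 192.168.106.132, .133, and .134 in sequence, all from the same affine block. A quick check of the block geometry; a /26 gives the node 64 pod addresses before it has to claim another block:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, _ := net.ParseCIDR("192.168.106.128/26")
        ones, bits := block.Mask.Size()
        fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64
        for _, s := range []string{"192.168.106.132", "192.168.106.133", "192.168.106.134"} {
            fmt.Println(s, block.Contains(net.ParseIP(s))) // all true
        }
    }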
Feb 13 23:47:14.597491 containerd[1511]: 2025-02-13 23:47:14.520 [INFO][4583] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" HandleID="k8s-pod-network.d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.529 [INFO][4536] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66d4b7ccb4-g9d5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5fbbd09e32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.532 [INFO][4536] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.134/32] ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.532 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5fbbd09e32 ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.553 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.555 [INFO][4536] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499", Pod:"calico-apiserver-66d4b7ccb4-g9d5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5fbbd09e32", MAC:"9a:46:1b:7e:5d:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:14.602479 containerd[1511]: 2025-02-13 23:47:14.593 [INFO][4536] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499" Namespace="calico-apiserver" Pod="calico-apiserver-66d4b7ccb4-g9d5d" WorkloadEndpoint="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:14.656575 containerd[1511]: time="2025-02-13T23:47:14.655182867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:47:14.656575 containerd[1511]: time="2025-02-13T23:47:14.655346029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:47:14.656575 containerd[1511]: time="2025-02-13T23:47:14.655370896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:14.656575 containerd[1511]: time="2025-02-13T23:47:14.655536924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:47:14.723541 systemd[1]: Started cri-containerd-d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499.scope - libcontainer container d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499. 
Feb 13 23:47:14.864092 systemd-networkd[1439]: vxlan.calico: Link UP Feb 13 23:47:14.864107 systemd-networkd[1439]: vxlan.calico: Gained carrier Feb 13 23:47:14.893961 containerd[1511]: time="2025-02-13T23:47:14.893671706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d4b7ccb4-g9d5d,Uid:6dcdb1d9-abfe-415f-add7-16a0f07fc6fe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499\"" Feb 13 23:47:15.014877 kubelet[2736]: I0213 23:47:15.014542 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xsvvc" podStartSLOduration=39.014508084 podStartE2EDuration="39.014508084s" podCreationTimestamp="2025-02-13 23:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:47:14.989567476 +0000 UTC m=+54.655182828" watchObservedRunningTime="2025-02-13 23:47:15.014508084 +0000 UTC m=+54.680123439" Feb 13 23:47:15.498761 systemd-networkd[1439]: cali2495fda403f: Gained IPv6LL Feb 13 23:47:15.564513 systemd-networkd[1439]: cali57997d9d43e: Gained IPv6LL Feb 13 23:47:15.883070 systemd-networkd[1439]: calib5fbbd09e32: Gained IPv6LL Feb 13 23:47:16.575311 containerd[1511]: time="2025-02-13T23:47:16.575149865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:16.577732 containerd[1511]: time="2025-02-13T23:47:16.577638866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 23:47:16.578808 containerd[1511]: time="2025-02-13T23:47:16.578127687Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:16.583096 containerd[1511]: time="2025-02-13T23:47:16.582950153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:16.585117 containerd[1511]: time="2025-02-13T23:47:16.585074910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.887253437s" Feb 13 23:47:16.585461 containerd[1511]: time="2025-02-13T23:47:16.585412805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 23:47:16.586452 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Feb 13 23:47:16.589876 containerd[1511]: time="2025-02-13T23:47:16.589616112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 23:47:16.619916 containerd[1511]: time="2025-02-13T23:47:16.619740040Z" level=info msg="CreateContainer within sandbox \"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 23:47:16.645549 containerd[1511]: 
time="2025-02-13T23:47:16.645493640Z" level=info msg="CreateContainer within sandbox \"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2\"" Feb 13 23:47:16.648155 containerd[1511]: time="2025-02-13T23:47:16.647207343Z" level=info msg="StartContainer for \"27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2\"" Feb 13 23:47:16.697549 systemd[1]: Started cri-containerd-27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2.scope - libcontainer container 27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2. Feb 13 23:47:16.782764 containerd[1511]: time="2025-02-13T23:47:16.782493786Z" level=info msg="StartContainer for \"27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2\" returns successfully" Feb 13 23:47:17.007827 kubelet[2736]: I0213 23:47:17.007607 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-655c684b7f-hp2zr" podStartSLOduration=28.113990573 podStartE2EDuration="33.007570504s" podCreationTimestamp="2025-02-13 23:46:44 +0000 UTC" firstStartedPulling="2025-02-13 23:47:11.695690908 +0000 UTC m=+51.361306252" lastFinishedPulling="2025-02-13 23:47:16.589270828 +0000 UTC m=+56.254886183" observedRunningTime="2025-02-13 23:47:16.999329954 +0000 UTC m=+56.664945322" watchObservedRunningTime="2025-02-13 23:47:17.007570504 +0000 UTC m=+56.673185856" Feb 13 23:47:18.550854 containerd[1511]: time="2025-02-13T23:47:18.549342560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:18.551865 containerd[1511]: time="2025-02-13T23:47:18.551171503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 23:47:18.552984 containerd[1511]: time="2025-02-13T23:47:18.552899467Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:18.558300 containerd[1511]: time="2025-02-13T23:47:18.558227709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:18.560383 containerd[1511]: time="2025-02-13T23:47:18.560333551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.97066549s" Feb 13 23:47:18.560481 containerd[1511]: time="2025-02-13T23:47:18.560382939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 23:47:18.564109 containerd[1511]: time="2025-02-13T23:47:18.563218907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 23:47:18.566899 containerd[1511]: time="2025-02-13T23:47:18.566860557Z" level=info msg="CreateContainer within sandbox \"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 23:47:18.594941 containerd[1511]: time="2025-02-13T23:47:18.594183376Z" level=info msg="CreateContainer within sandbox \"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"589180e2509e3124a4598e348e174ddd13ba83eae12bee7a73b66e57d4484d5d\"" Feb 13 23:47:18.595490 containerd[1511]: time="2025-02-13T23:47:18.595232200Z" level=info msg="StartContainer for \"589180e2509e3124a4598e348e174ddd13ba83eae12bee7a73b66e57d4484d5d\"" Feb 13 23:47:18.652526 systemd[1]: Started cri-containerd-589180e2509e3124a4598e348e174ddd13ba83eae12bee7a73b66e57d4484d5d.scope - libcontainer container 589180e2509e3124a4598e348e174ddd13ba83eae12bee7a73b66e57d4484d5d. Feb 13 23:47:18.720589 containerd[1511]: time="2025-02-13T23:47:18.720524269Z" level=info msg="StartContainer for \"589180e2509e3124a4598e348e174ddd13ba83eae12bee7a73b66e57d4484d5d\" returns successfully" Feb 13 23:47:20.598022 containerd[1511]: time="2025-02-13T23:47:20.597061415Z" level=info msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.799 [WARNING][4887] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"86f73b1f-c829-49fc-b6e2-286cf7bba006", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f", Pod:"coredns-7db6d8ff4d-zfpxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e1816d5c97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.800 [INFO][4887] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.807 
[INFO][4887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" iface="eth0" netns="" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.807 [INFO][4887] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.807 [INFO][4887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.878 [INFO][4893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.878 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.879 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.891 [WARNING][4893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.891 [INFO][4893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.894 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:20.901948 containerd[1511]: 2025-02-13 23:47:20.898 [INFO][4887] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:20.904273 containerd[1511]: time="2025-02-13T23:47:20.902828542Z" level=info msg="TearDown network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" successfully" Feb 13 23:47:20.904273 containerd[1511]: time="2025-02-13T23:47:20.902983711Z" level=info msg="StopPodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" returns successfully" Feb 13 23:47:20.906432 containerd[1511]: time="2025-02-13T23:47:20.905389909Z" level=info msg="RemovePodSandbox for \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" Feb 13 23:47:20.906432 containerd[1511]: time="2025-02-13T23:47:20.905589437Z" level=info msg="Forcibly stopping sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\"" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.124 [WARNING][4911] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"86f73b1f-c829-49fc-b6e2-286cf7bba006", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"36dcc4f6d33d6d632d9f03e965db99f9f2ec15491a5bbc49e1b8cd9c8a0e559f", Pod:"coredns-7db6d8ff4d-zfpxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e1816d5c97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.125 [INFO][4911] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.125 [INFO][4911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" iface="eth0" netns="" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.125 [INFO][4911] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.125 [INFO][4911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.245 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.247 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.247 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.260 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.260 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" HandleID="k8s-pod-network.5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--zfpxh-eth0" Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.262 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:21.269181 containerd[1511]: 2025-02-13 23:47:21.266 [INFO][4911] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf" Feb 13 23:47:21.271663 containerd[1511]: time="2025-02-13T23:47:21.271384040Z" level=info msg="TearDown network for sandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" successfully" Feb 13 23:47:21.288029 containerd[1511]: time="2025-02-13T23:47:21.287979389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 23:47:21.288483 containerd[1511]: time="2025-02-13T23:47:21.288450412Z" level=info msg="RemovePodSandbox \"5768503c37e4dfd65fcd2225c1f4a3e82a009669c4175b8dccc1e2f3963872cf\" returns successfully" Feb 13 23:47:21.290551 containerd[1511]: time="2025-02-13T23:47:21.290511857Z" level=info msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.447 [WARNING][4943] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f", Pod:"csi-node-driver-rwh7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4c0a3723c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.449 [INFO][4943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.449 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" iface="eth0" netns="" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.449 [INFO][4943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.449 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.579 [INFO][4949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.580 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.580 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.590 [WARNING][4949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.590 [INFO][4949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.592 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:21.609028 containerd[1511]: 2025-02-13 23:47:21.599 [INFO][4943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.613032 containerd[1511]: time="2025-02-13T23:47:21.609093551Z" level=info msg="TearDown network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" successfully" Feb 13 23:47:21.613032 containerd[1511]: time="2025-02-13T23:47:21.609128708Z" level=info msg="StopPodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" returns successfully" Feb 13 23:47:21.613032 containerd[1511]: time="2025-02-13T23:47:21.611515808Z" level=info msg="RemovePodSandbox for \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" Feb 13 23:47:21.613032 containerd[1511]: time="2025-02-13T23:47:21.611578396Z" level=info msg="Forcibly stopping sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\"" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.772 [WARNING][4975] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e82caf-91d2-4fb9-9ee6-89b2d78222a5", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f", Pod:"csi-node-driver-rwh7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4c0a3723c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.773 [INFO][4975] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.773 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" iface="eth0" netns="" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.773 [INFO][4975] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.773 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.851 [INFO][4981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.852 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.852 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.866 [WARNING][4981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.866 [INFO][4981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" HandleID="k8s-pod-network.65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Workload="srv--gs5j1.gb1.brightbox.com-k8s-csi--node--driver--rwh7r-eth0" Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.870 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:21.883719 containerd[1511]: 2025-02-13 23:47:21.875 [INFO][4975] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5" Feb 13 23:47:21.883719 containerd[1511]: time="2025-02-13T23:47:21.883460520Z" level=info msg="TearDown network for sandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" successfully" Feb 13 23:47:21.889263 containerd[1511]: time="2025-02-13T23:47:21.888697480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 23:47:21.889263 containerd[1511]: time="2025-02-13T23:47:21.888862731Z" level=info msg="RemovePodSandbox \"65295381ffeb56c0d0d2d10d09f51e626396b5edf3747f15933d31b9302aacd5\" returns successfully" Feb 13 23:47:21.890379 containerd[1511]: time="2025-02-13T23:47:21.890327553Z" level=info msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.048 [WARNING][5000] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0", GenerateName:"calico-kube-controllers-655c684b7f-", Namespace:"calico-system", SelfLink:"", UID:"a5485d8a-381a-4e03-acb1-c089decfc6bb", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655c684b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860", Pod:"calico-kube-controllers-655c684b7f-hp2zr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8903c17bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.049 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.049 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" iface="eth0" netns="" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.049 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.049 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.100 [INFO][5006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.101 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.101 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.112 [WARNING][5006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.112 [INFO][5006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.117 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:22.122718 containerd[1511]: 2025-02-13 23:47:22.120 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.122718 containerd[1511]: time="2025-02-13T23:47:22.122555039Z" level=info msg="TearDown network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" successfully" Feb 13 23:47:22.122718 containerd[1511]: time="2025-02-13T23:47:22.122592283Z" level=info msg="StopPodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" returns successfully" Feb 13 23:47:22.124882 containerd[1511]: time="2025-02-13T23:47:22.123820460Z" level=info msg="RemovePodSandbox for \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" Feb 13 23:47:22.124882 containerd[1511]: time="2025-02-13T23:47:22.123873548Z" level=info msg="Forcibly stopping sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\"" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.255 [WARNING][5024] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0", GenerateName:"calico-kube-controllers-655c684b7f-", Namespace:"calico-system", SelfLink:"", UID:"a5485d8a-381a-4e03-acb1-c089decfc6bb", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655c684b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"fe4eba632471c1062899e93cd38095dd1d60d418749b8dbc2e4f9c9e3be91860", Pod:"calico-kube-controllers-655c684b7f-hp2zr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8903c17bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.257 [INFO][5024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.258 [INFO][5024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" iface="eth0" netns="" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.258 [INFO][5024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.258 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.344 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.344 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.344 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.358 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.359 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" HandleID="k8s-pod-network.175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--kube--controllers--655c684b7f--hp2zr-eth0" Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.362 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:22.370076 containerd[1511]: 2025-02-13 23:47:22.366 [INFO][5024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af" Feb 13 23:47:22.371167 containerd[1511]: time="2025-02-13T23:47:22.370138201Z" level=info msg="TearDown network for sandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" successfully" Feb 13 23:47:22.392271 containerd[1511]: time="2025-02-13T23:47:22.391404411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 23:47:22.392271 containerd[1511]: time="2025-02-13T23:47:22.391493849Z" level=info msg="RemovePodSandbox \"175991e5975e6366160ad2d17f8d10b639614868a1303421927754aece0f46af\" returns successfully" Feb 13 23:47:22.398826 containerd[1511]: time="2025-02-13T23:47:22.398357953Z" level=info msg="StopPodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.582 [WARNING][5049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf", Pod:"coredns-7db6d8ff4d-xsvvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57997d9d43e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.583 [INFO][5049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.583 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" iface="eth0" netns="" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.583 [INFO][5049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.583 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.655 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.656 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.656 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.666 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.666 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.669 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:22.678438 containerd[1511]: 2025-02-13 23:47:22.674 [INFO][5049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.678438 containerd[1511]: time="2025-02-13T23:47:22.678389330Z" level=info msg="TearDown network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" successfully" Feb 13 23:47:22.682896 containerd[1511]: time="2025-02-13T23:47:22.678441051Z" level=info msg="StopPodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" returns successfully" Feb 13 23:47:22.682896 containerd[1511]: time="2025-02-13T23:47:22.681304312Z" level=info msg="RemovePodSandbox for \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" Feb 13 23:47:22.682896 containerd[1511]: time="2025-02-13T23:47:22.681367413Z" level=info msg="Forcibly stopping sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\"" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.845 [WARNING][5074] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"96208589-c8c3-4cfc-a4f9-c1d2e0b2d2ee", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"14471e451cc509b8094103f45e52ac9e9b3409931c6002e633fb6970cb7a1edf", Pod:"coredns-7db6d8ff4d-xsvvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57997d9d43e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.846 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.846 [INFO][5074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" iface="eth0" netns="" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.846 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.846 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.953 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.954 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.954 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.967 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.967 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" HandleID="k8s-pod-network.90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Workload="srv--gs5j1.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--xsvvc-eth0" Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.970 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:22.977811 containerd[1511]: 2025-02-13 23:47:22.975 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad" Feb 13 23:47:22.979997 containerd[1511]: time="2025-02-13T23:47:22.978730638Z" level=info msg="TearDown network for sandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" successfully" Feb 13 23:47:22.992305 containerd[1511]: time="2025-02-13T23:47:22.992170769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 23:47:22.992305 containerd[1511]: time="2025-02-13T23:47:22.992279827Z" level=info msg="RemovePodSandbox \"90aaa0ff76e656d199b0711b24a8755b8f8bf81061703a38f9cea7b2207070ad\" returns successfully" Feb 13 23:47:22.994949 containerd[1511]: time="2025-02-13T23:47:22.993760456Z" level=info msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.122 [WARNING][5098] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499", Pod:"calico-apiserver-66d4b7ccb4-g9d5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5fbbd09e32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.123 [INFO][5098] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.123 [INFO][5098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" iface="eth0" netns="" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.124 [INFO][5098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.124 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.266 [INFO][5104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.266 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.266 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.297 [WARNING][5104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.298 [INFO][5104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.310 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:23.324203 containerd[1511]: 2025-02-13 23:47:23.316 [INFO][5098] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.324203 containerd[1511]: time="2025-02-13T23:47:23.323556005Z" level=info msg="TearDown network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" successfully" Feb 13 23:47:23.324203 containerd[1511]: time="2025-02-13T23:47:23.323594765Z" level=info msg="StopPodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" returns successfully" Feb 13 23:47:23.327598 containerd[1511]: time="2025-02-13T23:47:23.324928980Z" level=info msg="RemovePodSandbox for \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" Feb 13 23:47:23.327598 containerd[1511]: time="2025-02-13T23:47:23.324999272Z" level=info msg="Forcibly stopping sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\"" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.464 [WARNING][5122] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dcdb1d9-abfe-415f-add7-16a0f07fc6fe", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499", Pod:"calico-apiserver-66d4b7ccb4-g9d5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5fbbd09e32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.465 [INFO][5122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.465 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" iface="eth0" netns="" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.465 [INFO][5122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.465 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.543 [INFO][5129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.543 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.543 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.564 [WARNING][5129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.564 [INFO][5129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" HandleID="k8s-pod-network.a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--g9d5d-eth0" Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.568 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:23.582580 containerd[1511]: 2025-02-13 23:47:23.574 [INFO][5122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6" Feb 13 23:47:23.582580 containerd[1511]: time="2025-02-13T23:47:23.581103177Z" level=info msg="TearDown network for sandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" successfully" Feb 13 23:47:23.588916 containerd[1511]: time="2025-02-13T23:47:23.588757534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 23:47:23.589027 containerd[1511]: time="2025-02-13T23:47:23.588845842Z" level=info msg="RemovePodSandbox \"a5640b77b1f6781e9408499fbb0dd85678b1d67b2c92b64e6d1ab01fa1de34c6\" returns successfully" Feb 13 23:47:23.589853 containerd[1511]: time="2025-02-13T23:47:23.589697038Z" level=info msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" Feb 13 23:47:23.786532 containerd[1511]: time="2025-02-13T23:47:23.786465795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:23.788917 containerd[1511]: time="2025-02-13T23:47:23.788335688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 23:47:23.798630 containerd[1511]: time="2025-02-13T23:47:23.798524453Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.726 [WARNING][5147] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09", Pod:"calico-apiserver-66d4b7ccb4-trqhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2495fda403f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.726 [INFO][5147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.727 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" iface="eth0" netns="" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.727 [INFO][5147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.727 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.771 [INFO][5153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.772 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.772 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.790 [WARNING][5153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.790 [INFO][5153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.796 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:23.802422 containerd[1511]: 2025-02-13 23:47:23.798 [INFO][5147] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:23.803585 containerd[1511]: time="2025-02-13T23:47:23.802469513Z" level=info msg="TearDown network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" successfully" Feb 13 23:47:23.803585 containerd[1511]: time="2025-02-13T23:47:23.802499777Z" level=info msg="StopPodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" returns successfully" Feb 13 23:47:23.803585 containerd[1511]: time="2025-02-13T23:47:23.803216043Z" level=info msg="RemovePodSandbox for \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" Feb 13 23:47:23.805340 containerd[1511]: time="2025-02-13T23:47:23.805295766Z" level=info msg="Forcibly stopping sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\"" Feb 13 23:47:23.809482 containerd[1511]: time="2025-02-13T23:47:23.809422125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:23.815394 containerd[1511]: time="2025-02-13T23:47:23.815307312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.252015549s" Feb 13 23:47:23.815394 containerd[1511]: time="2025-02-13T23:47:23.815365693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 23:47:23.817966 containerd[1511]: time="2025-02-13T23:47:23.817417898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 23:47:23.827282 containerd[1511]: time="2025-02-13T23:47:23.826082343Z" level=info msg="CreateContainer within sandbox \"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 23:47:23.880208 containerd[1511]: time="2025-02-13T23:47:23.879358584Z" level=info msg="CreateContainer within sandbox \"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38e0a3155ccab4ca1b2376bd47b2688b181c87696147c06d4df13bc48d3371f3\"" Feb 13 
23:47:23.888279 containerd[1511]: time="2025-02-13T23:47:23.887477037Z" level=info msg="StartContainer for \"38e0a3155ccab4ca1b2376bd47b2688b181c87696147c06d4df13bc48d3371f3\"" Feb 13 23:47:23.966489 systemd[1]: Started cri-containerd-38e0a3155ccab4ca1b2376bd47b2688b181c87696147c06d4df13bc48d3371f3.scope - libcontainer container 38e0a3155ccab4ca1b2376bd47b2688b181c87696147c06d4df13bc48d3371f3. Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.000 [WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0", GenerateName:"calico-apiserver-66d4b7ccb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a373ede0-5d5e-4f23-95b8-1a3f2d2a9b40", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 23, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d4b7ccb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gs5j1.gb1.brightbox.com", ContainerID:"9aa64654307ab259ea98c26fabed1781e69a997106cb35c3fc5b56f9b274ca09", Pod:"calico-apiserver-66d4b7ccb4-trqhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2495fda403f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.001 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.001 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" iface="eth0" netns="" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.002 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.002 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.070 [INFO][5200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.070 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.070 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.086 [WARNING][5200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.086 [INFO][5200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" HandleID="k8s-pod-network.e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Workload="srv--gs5j1.gb1.brightbox.com-k8s-calico--apiserver--66d4b7ccb4--trqhb-eth0" Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.110 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 23:47:24.116008 containerd[1511]: 2025-02-13 23:47:24.112 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055" Feb 13 23:47:24.116853 containerd[1511]: time="2025-02-13T23:47:24.116061311Z" level=info msg="TearDown network for sandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" successfully" Feb 13 23:47:24.120505 containerd[1511]: time="2025-02-13T23:47:24.120451529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 23:47:24.120603 containerd[1511]: time="2025-02-13T23:47:24.120526673Z" level=info msg="RemovePodSandbox \"e77ff33ab4999430340fb9803e0e48a805af751eebb106afed896b2cae410055\" returns successfully" Feb 13 23:47:24.185697 containerd[1511]: time="2025-02-13T23:47:24.185512469Z" level=info msg="StartContainer for \"38e0a3155ccab4ca1b2376bd47b2688b181c87696147c06d4df13bc48d3371f3\" returns successfully" Feb 13 23:47:24.288143 containerd[1511]: time="2025-02-13T23:47:24.287219548Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:24.291170 containerd[1511]: time="2025-02-13T23:47:24.291112228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 23:47:24.297771 containerd[1511]: time="2025-02-13T23:47:24.296147730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 478.680921ms" Feb 13 23:47:24.297771 containerd[1511]: time="2025-02-13T23:47:24.296217173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 23:47:24.298308 containerd[1511]: time="2025-02-13T23:47:24.298242220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 23:47:24.301991 containerd[1511]: time="2025-02-13T23:47:24.301939688Z" level=info msg="CreateContainer within sandbox \"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 23:47:24.324432 containerd[1511]: time="2025-02-13T23:47:24.324373843Z" level=info msg="CreateContainer within sandbox \"d428a4cd3a6362c5d17980b4a993e20ee0421b5e067004f8f31375b813b74499\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"362d635c93ee9f652a3efb7dba6e280eb2360fc8e251d98229c787df30b4405f\"" Feb 13 23:47:24.325485 containerd[1511]: time="2025-02-13T23:47:24.325448024Z" level=info msg="StartContainer for \"362d635c93ee9f652a3efb7dba6e280eb2360fc8e251d98229c787df30b4405f\"" Feb 13 23:47:24.384693 systemd[1]: Started cri-containerd-362d635c93ee9f652a3efb7dba6e280eb2360fc8e251d98229c787df30b4405f.scope - libcontainer container 362d635c93ee9f652a3efb7dba6e280eb2360fc8e251d98229c787df30b4405f. 
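The PullImage, CreateContainer and StartContainer entries above are CRI calls from the kubelet into containerd, and the "Started cri-containerd-<id>.scope" units are systemd's view of the same tasks starting. Outside Kubernetes, an equivalent lifecycle can be driven with containerd's Go client; the sketch below follows the project's documented getting-started flow, with the image ref taken from the log, sandbox setup omitted, and error handling elided:

package main

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same daemon that produced the containerd[1511] entries.
	client, _ := containerd.New("/run/containerd/containerd.sock")
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Equivalent of the logged PullImage; emits the same ImageCreate events.
	image, _ := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1",
		containerd.WithPullUnpack)

	// Equivalent of CreateContainer within a sandbox (sandbox setup omitted).
	container, _ := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Starting the task is what surfaces as "Started cri-containerd-….scope".
	task, _ := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	defer task.Delete(ctx)
	_ = task.Start(ctx)
}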
Feb 13 23:47:24.472645 containerd[1511]: time="2025-02-13T23:47:24.472360015Z" level=info msg="StartContainer for \"362d635c93ee9f652a3efb7dba6e280eb2360fc8e251d98229c787df30b4405f\" returns successfully" Feb 13 23:47:25.125376 kubelet[2736]: I0213 23:47:25.125237 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-g9d5d" podStartSLOduration=32.725784906 podStartE2EDuration="42.125211347s" podCreationTimestamp="2025-02-13 23:46:43 +0000 UTC" firstStartedPulling="2025-02-13 23:47:14.898011293 +0000 UTC m=+54.563626631" lastFinishedPulling="2025-02-13 23:47:24.297437722 +0000 UTC m=+63.963053072" observedRunningTime="2025-02-13 23:47:25.107574168 +0000 UTC m=+64.773189533" watchObservedRunningTime="2025-02-13 23:47:25.125211347 +0000 UTC m=+64.790826694" Feb 13 23:47:26.060458 kubelet[2736]: I0213 23:47:26.059606 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 23:47:26.060458 kubelet[2736]: I0213 23:47:26.060215 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 23:47:27.958783 containerd[1511]: time="2025-02-13T23:47:27.958707911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:27.961104 containerd[1511]: time="2025-02-13T23:47:27.961019472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 23:47:27.972574 containerd[1511]: time="2025-02-13T23:47:27.971050660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.672728418s" Feb 13 23:47:27.972574 containerd[1511]: time="2025-02-13T23:47:27.971125483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 23:47:27.979293 containerd[1511]: time="2025-02-13T23:47:27.979231345Z" level=info msg="CreateContainer within sandbox \"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 23:47:28.047371 containerd[1511]: time="2025-02-13T23:47:28.046634372Z" level=info msg="CreateContainer within sandbox \"4623900cb55f0ffef5e95529cc2ffced8289b2d9f33a0b3c6e11c07cc94e882f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c5fd52b85d9c47a86b05271e217a96916bdc25717ddf2b66b39c407b0933afc5\"" Feb 13 23:47:28.049175 containerd[1511]: time="2025-02-13T23:47:28.049134891Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:28.050099 containerd[1511]: time="2025-02-13T23:47:28.049199667Z" level=info msg="StartContainer for \"c5fd52b85d9c47a86b05271e217a96916bdc25717ddf2b66b39c407b0933afc5\"" Feb 13 23:47:28.051637 containerd[1511]: time="2025-02-13T23:47:28.051595493Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:47:28.197516 systemd[1]: Started cri-containerd-c5fd52b85d9c47a86b05271e217a96916bdc25717ddf2b66b39c407b0933afc5.scope - libcontainer container c5fd52b85d9c47a86b05271e217a96916bdc25717ddf2b66b39c407b0933afc5. Feb 13 23:47:28.269033 containerd[1511]: time="2025-02-13T23:47:28.268841712Z" level=info msg="StartContainer for \"c5fd52b85d9c47a86b05271e217a96916bdc25717ddf2b66b39c407b0933afc5\" returns successfully" Feb 13 23:47:28.947137 kubelet[2736]: I0213 23:47:28.946914 2736 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 23:47:28.957187 kubelet[2736]: I0213 23:47:28.957026 2736 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 23:47:29.143279 kubelet[2736]: I0213 23:47:29.143020 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwh7r" podStartSLOduration=29.913877164 podStartE2EDuration="45.142981963s" podCreationTimestamp="2025-02-13 23:46:44 +0000 UTC" firstStartedPulling="2025-02-13 23:47:12.746433544 +0000 UTC m=+52.412048883" lastFinishedPulling="2025-02-13 23:47:27.975538327 +0000 UTC m=+67.641153682" observedRunningTime="2025-02-13 23:47:29.141997068 +0000 UTC m=+68.807612439" watchObservedRunningTime="2025-02-13 23:47:29.142981963 +0000 UTC m=+68.808597319" Feb 13 23:47:29.143653 kubelet[2736]: I0213 23:47:29.143325 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d4b7ccb4-trqhb" podStartSLOduration=35.594300097 podStartE2EDuration="45.143313783s" podCreationTimestamp="2025-02-13 23:46:44 +0000 UTC" firstStartedPulling="2025-02-13 23:47:14.26819354 +0000 UTC m=+53.933808879" lastFinishedPulling="2025-02-13 23:47:23.817207222 +0000 UTC m=+63.482822565" observedRunningTime="2025-02-13 23:47:25.12576313 +0000 UTC m=+64.791378487" watchObservedRunningTime="2025-02-13 23:47:29.143313783 +0000 UTC m=+68.808929148" Feb 13 23:47:37.051914 kubelet[2736]: I0213 23:47:37.051748 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 23:47:42.155939 systemd[1]: Started sshd@9-10.230.61.58:22-147.75.109.163:35384.service - OpenSSH per-connection server daemon (147.75.109.163:35384). Feb 13 23:47:43.150324 sshd[5350]: Accepted publickey for core from 147.75.109.163 port 35384 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:47:43.152647 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:47:43.167067 systemd-logind[1490]: New session 12 of user core. Feb 13 23:47:43.183081 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 23:47:44.391024 sshd[5350]: pam_unix(sshd:session): session closed for user core Feb 13 23:47:44.397833 systemd[1]: sshd@9-10.230.61.58:22-147.75.109.163:35384.service: Deactivated successfully. Feb 13 23:47:44.401507 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 23:47:44.403274 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Feb 13 23:47:44.405466 systemd-logind[1490]: Removed session 12. 
Feb 13 23:47:48.800380 kubelet[2736]: I0213 23:47:48.800092 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 23:47:49.553621 systemd[1]: Started sshd@10-10.230.61.58:22-147.75.109.163:57700.service - OpenSSH per-connection server daemon (147.75.109.163:57700).
Feb 13 23:47:50.499731 sshd[5390]: Accepted publickey for core from 147.75.109.163 port 57700 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:47:50.500661 sshd[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:47:50.515628 systemd-logind[1490]: New session 13 of user core.
Feb 13 23:47:50.524407 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 23:47:50.565859 systemd[1]: run-containerd-runc-k8s.io-27bd84bf689485b5cc4e38e0aa45d80e141f776c46e7ba3f784d2a4a44571bf2-runc.Yeofyi.mount: Deactivated successfully.
Feb 13 23:47:51.358008 sshd[5390]: pam_unix(sshd:session): session closed for user core
Feb 13 23:47:51.373589 systemd[1]: sshd@10-10.230.61.58:22-147.75.109.163:57700.service: Deactivated successfully.
Feb 13 23:47:51.378527 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 23:47:51.381787 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit.
Feb 13 23:47:51.384732 systemd-logind[1490]: Removed session 13.
Feb 13 23:47:56.518689 systemd[1]: Started sshd@11-10.230.61.58:22-147.75.109.163:57716.service - OpenSSH per-connection server daemon (147.75.109.163:57716).
Feb 13 23:47:57.470916 sshd[5429]: Accepted publickey for core from 147.75.109.163 port 57716 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:47:57.475610 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:47:57.484048 systemd-logind[1490]: New session 14 of user core.
Feb 13 23:47:57.492579 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 23:47:58.202278 sshd[5429]: pam_unix(sshd:session): session closed for user core
Feb 13 23:47:58.210558 systemd[1]: sshd@11-10.230.61.58:22-147.75.109.163:57716.service: Deactivated successfully.
Feb 13 23:47:58.213572 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 23:47:58.215177 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
Feb 13 23:47:58.216686 systemd-logind[1490]: Removed session 14.
Feb 13 23:47:58.360731 systemd[1]: Started sshd@12-10.230.61.58:22-147.75.109.163:57726.service - OpenSSH per-connection server daemon (147.75.109.163:57726).
Feb 13 23:47:59.279710 sshd[5444]: Accepted publickey for core from 147.75.109.163 port 57726 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:47:59.283328 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:47:59.294492 systemd-logind[1490]: New session 15 of user core.
Feb 13 23:47:59.299587 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 23:48:00.135897 sshd[5444]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:00.148382 systemd[1]: sshd@12-10.230.61.58:22-147.75.109.163:57726.service: Deactivated successfully.
Feb 13 23:48:00.151073 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 23:48:00.152366 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
Feb 13 23:48:00.154007 systemd-logind[1490]: Removed session 15.
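The remainder of the excerpt repeats one pattern: sshd accepts a publickey login for core, pam_unix opens the session, systemd-logind allocates a session number, systemd runs it as a session-N.scope, and the same steps unwind on logout. While sessions like these are live they can be enumerated from logind; a sketch using the org.freedesktop.login1 Manager.ListSessions D-Bus call via the github.com/godbus/dbus/v5 bindings (the bindings are an assumption; any D-Bus client works):

```go
// List active logind sessions (the session-N.scope units above) over D-Bus.
// Sketch only; assumes godbus/dbus/v5 and access to the system bus.
package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus/v5"
)

// session mirrors ListSessions' a(susso) reply: id, uid, user, seat, path.
type session struct {
	ID   string
	UID  uint32
	User string
	Seat string
	Path dbus.ObjectPath
}

func main() {
	conn, err := dbus.ConnectSystemBus()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	var sessions []session
	obj := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")
	if err := obj.Call("org.freedesktop.login1.Manager.ListSessions", 0).Store(&sessions); err != nil {
		log.Fatal(err)
	}
	for _, s := range sessions {
		// e.g. "session 12: user core (uid 500)" for the login above
		fmt.Printf("session %s: user %s (uid %d)\n", s.ID, s.User, s.UID)
	}
}
```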
Feb 13 23:48:00.301901 systemd[1]: Started sshd@13-10.230.61.58:22-147.75.109.163:55518.service - OpenSSH per-connection server daemon (147.75.109.163:55518).
Feb 13 23:48:01.215523 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 55518 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:01.218141 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:01.227849 systemd-logind[1490]: New session 16 of user core.
Feb 13 23:48:01.237487 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 23:48:01.956218 sshd[5474]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:01.963824 systemd[1]: sshd@13-10.230.61.58:22-147.75.109.163:55518.service: Deactivated successfully.
Feb 13 23:48:01.966445 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 23:48:01.967891 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit.
Feb 13 23:48:01.970445 systemd-logind[1490]: Removed session 16.
Feb 13 23:48:07.116627 systemd[1]: Started sshd@14-10.230.61.58:22-147.75.109.163:55530.service - OpenSSH per-connection server daemon (147.75.109.163:55530).
Feb 13 23:48:08.034877 sshd[5493]: Accepted publickey for core from 147.75.109.163 port 55530 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:08.038039 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:08.046336 systemd-logind[1490]: New session 17 of user core.
Feb 13 23:48:08.050493 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 23:48:08.786548 sshd[5493]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:08.792993 systemd[1]: sshd@14-10.230.61.58:22-147.75.109.163:55530.service: Deactivated successfully.
Feb 13 23:48:08.797840 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 23:48:08.804590 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Feb 13 23:48:08.807572 systemd-logind[1490]: Removed session 17.
Feb 13 23:48:08.948654 systemd[1]: Started sshd@15-10.230.61.58:22-147.75.109.163:55544.service - OpenSSH per-connection server daemon (147.75.109.163:55544).
Feb 13 23:48:09.851227 sshd[5508]: Accepted publickey for core from 147.75.109.163 port 55544 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:09.853835 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:09.861667 systemd-logind[1490]: New session 18 of user core.
Feb 13 23:48:09.866482 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 23:48:10.967967 sshd[5508]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:10.975755 systemd[1]: sshd@15-10.230.61.58:22-147.75.109.163:55544.service: Deactivated successfully.
Feb 13 23:48:10.980328 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 23:48:10.982112 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Feb 13 23:48:10.984889 systemd-logind[1490]: Removed session 18.
Feb 13 23:48:11.131193 systemd[1]: Started sshd@16-10.230.61.58:22-147.75.109.163:45308.service - OpenSSH per-connection server daemon (147.75.109.163:45308).
Feb 13 23:48:12.061953 sshd[5521]: Accepted publickey for core from 147.75.109.163 port 45308 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:12.064599 sshd[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:12.071185 systemd-logind[1490]: New session 19 of user core.
Feb 13 23:48:12.078584 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 23:48:15.418344 sshd[5521]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:15.431374 systemd[1]: sshd@16-10.230.61.58:22-147.75.109.163:45308.service: Deactivated successfully.
Feb 13 23:48:15.434620 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 23:48:15.437065 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Feb 13 23:48:15.439144 systemd-logind[1490]: Removed session 19.
Feb 13 23:48:15.578811 systemd[1]: Started sshd@17-10.230.61.58:22-147.75.109.163:45318.service - OpenSSH per-connection server daemon (147.75.109.163:45318).
Feb 13 23:48:16.524274 sshd[5541]: Accepted publickey for core from 147.75.109.163 port 45318 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:16.527176 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:16.533653 systemd-logind[1490]: New session 20 of user core.
Feb 13 23:48:16.541566 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 23:48:17.822518 sshd[5541]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:17.844195 systemd[1]: sshd@17-10.230.61.58:22-147.75.109.163:45318.service: Deactivated successfully.
Feb 13 23:48:17.847172 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 23:48:17.850623 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Feb 13 23:48:17.854767 systemd-logind[1490]: Removed session 20.
Feb 13 23:48:17.983645 systemd[1]: Started sshd@18-10.230.61.58:22-147.75.109.163:45332.service - OpenSSH per-connection server daemon (147.75.109.163:45332).
Feb 13 23:48:18.905230 sshd[5575]: Accepted publickey for core from 147.75.109.163 port 45332 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:18.907725 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:18.916110 systemd-logind[1490]: New session 21 of user core.
Feb 13 23:48:18.919558 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 23:48:19.719887 sshd[5575]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:19.728054 systemd[1]: sshd@18-10.230.61.58:22-147.75.109.163:45332.service: Deactivated successfully.
Feb 13 23:48:19.731997 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 23:48:19.734071 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Feb 13 23:48:19.737802 systemd-logind[1490]: Removed session 21.
Feb 13 23:48:24.877834 systemd[1]: Started sshd@19-10.230.61.58:22-147.75.109.163:33846.service - OpenSSH per-connection server daemon (147.75.109.163:33846).
Feb 13 23:48:25.807222 sshd[5594]: Accepted publickey for core from 147.75.109.163 port 33846 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:25.810730 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:25.820396 systemd-logind[1490]: New session 22 of user core.
Feb 13 23:48:25.827547 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 23:48:26.534414 sshd[5594]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:26.538980 systemd[1]: sshd@19-10.230.61.58:22-147.75.109.163:33846.service: Deactivated successfully.
Feb 13 23:48:26.545755 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 23:48:26.548405 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Feb 13 23:48:26.550332 systemd-logind[1490]: Removed session 22.
Feb 13 23:48:31.700654 systemd[1]: Started sshd@20-10.230.61.58:22-147.75.109.163:43046.service - OpenSSH per-connection server daemon (147.75.109.163:43046).
Feb 13 23:48:32.600689 sshd[5628]: Accepted publickey for core from 147.75.109.163 port 43046 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 13 23:48:32.603233 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:48:32.611343 systemd-logind[1490]: New session 23 of user core.
Feb 13 23:48:32.617493 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 23:48:33.401761 sshd[5628]: pam_unix(sshd:session): session closed for user core
Feb 13 23:48:33.410329 systemd[1]: sshd@20-10.230.61.58:22-147.75.109.163:43046.service: Deactivated successfully.
Feb 13 23:48:33.410374 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Feb 13 23:48:33.414814 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 23:48:33.418764 systemd-logind[1490]: Removed session 23.