Feb 14 00:50:31.027811 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 14 00:50:31.027849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 00:50:31.027864 kernel: BIOS-provided physical RAM map:
Feb 14 00:50:31.027879 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 14 00:50:31.027890 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 14 00:50:31.027900 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 14 00:50:31.027912 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 14 00:50:31.027922 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 14 00:50:31.027932 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 14 00:50:31.027965 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 14 00:50:31.027977 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 14 00:50:31.027987 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 14 00:50:31.028004 kernel: NX (Execute Disable) protection: active
Feb 14 00:50:31.028015 kernel: APIC: Static calls initialized
Feb 14 00:50:31.028027 kernel: SMBIOS 2.8 present.
Feb 14 00:50:31.028039 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 14 00:50:31.028050 kernel: Hypervisor detected: KVM
Feb 14 00:50:31.028066 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 14 00:50:31.028078 kernel: kvm-clock: using sched offset of 4326139839 cycles
Feb 14 00:50:31.028091 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 14 00:50:31.028102 kernel: tsc: Detected 2499.998 MHz processor
Feb 14 00:50:31.028114 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 14 00:50:31.028126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 14 00:50:31.028149 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 14 00:50:31.028160 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 14 00:50:31.028171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 14 00:50:31.028187 kernel: Using GB pages for direct mapping
Feb 14 00:50:31.028211 kernel: ACPI: Early table checksum verification disabled
Feb 14 00:50:31.028222 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 14 00:50:31.028234 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028245 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028257 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028268 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 14 00:50:31.028279 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028291 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028307 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028319 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:50:31.028330 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 14 00:50:31.028341 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 14 00:50:31.028353 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 14 00:50:31.028371 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 14 00:50:31.028383 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 14 00:50:31.028400 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 14 00:50:31.028412 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 14 00:50:31.028424 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 14 00:50:31.028436 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 14 00:50:31.028460 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 14 00:50:31.028473 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 14 00:50:31.028485 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 14 00:50:31.028503 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 14 00:50:31.028515 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 14 00:50:31.028527 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 14 00:50:31.028538 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 14 00:50:31.028550 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 14 00:50:31.028562 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 14 00:50:31.028574 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 14 00:50:31.028586 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 14 00:50:31.028598 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 14 00:50:31.028610 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 14 00:50:31.028627 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 14 00:50:31.028639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 14 00:50:31.028652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 14 00:50:31.028663 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 14 00:50:31.028675 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 14 00:50:31.028688 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 14 00:50:31.028700 kernel: Zone ranges:
Feb 14 00:50:31.028712 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 14 00:50:31.028724 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 14 00:50:31.028752 kernel: Normal empty
Feb 14 00:50:31.028764 kernel: Movable zone start for each node
Feb 14 00:50:31.028776 kernel: Early memory node ranges
Feb 14 00:50:31.028787 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 14 00:50:31.028812 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 14 00:50:31.028824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 14 00:50:31.028836 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 14 00:50:31.028848 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 14 00:50:31.028860 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 14 00:50:31.028872 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 14 00:50:31.028889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 14 00:50:31.028901 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 14 00:50:31.028913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 14 00:50:31.028925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 14 00:50:31.028937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 14 00:50:31.028949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 14 00:50:31.029040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 14 00:50:31.029058 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 14 00:50:31.029071 kernel: TSC deadline timer available
Feb 14 00:50:31.029090 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 14 00:50:31.029102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 14 00:50:31.029114 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 14 00:50:31.029126 kernel: Booting paravirtualized kernel on KVM
Feb 14 00:50:31.029138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 14 00:50:31.029150 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 14 00:50:31.029163 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 14 00:50:31.029175 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 14 00:50:31.029200 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 14 00:50:31.029216 kernel: kvm-guest: PV spinlocks enabled
Feb 14 00:50:31.029228 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 14 00:50:31.029241 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 00:50:31.029266 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 14 00:50:31.029278 kernel: random: crng init done
Feb 14 00:50:31.029290 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 00:50:31.029302 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 14 00:50:31.029314 kernel: Fallback order for Node 0: 0
Feb 14 00:50:31.029332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 14 00:50:31.029344 kernel: Policy zone: DMA32
Feb 14 00:50:31.029356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 14 00:50:31.029368 kernel: software IO TLB: area num 16.
Feb 14 00:50:31.029380 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194824K reserved, 0K cma-reserved)
Feb 14 00:50:31.029392 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 14 00:50:31.029421 kernel: Kernel/User page tables isolation: enabled
Feb 14 00:50:31.029435 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 14 00:50:31.029459 kernel: ftrace: allocated 149 pages with 4 groups
Feb 14 00:50:31.029479 kernel: Dynamic Preempt: voluntary
Feb 14 00:50:31.029491 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 14 00:50:31.029504 kernel: rcu: RCU event tracing is enabled.
Feb 14 00:50:31.029516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 14 00:50:31.029529 kernel: Trampoline variant of Tasks RCU enabled.
Feb 14 00:50:31.029553 kernel: Rude variant of Tasks RCU enabled.
Feb 14 00:50:31.029571 kernel: Tracing variant of Tasks RCU enabled.
Feb 14 00:50:31.029584 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 14 00:50:31.029597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 14 00:50:31.029609 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 14 00:50:31.029622 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 14 00:50:31.029639 kernel: Console: colour VGA+ 80x25
Feb 14 00:50:31.029651 kernel: printk: console [tty0] enabled
Feb 14 00:50:31.029664 kernel: printk: console [ttyS0] enabled
Feb 14 00:50:31.029677 kernel: ACPI: Core revision 20230628
Feb 14 00:50:31.029689 kernel: APIC: Switch to symmetric I/O mode setup
Feb 14 00:50:31.029706 kernel: x2apic enabled
Feb 14 00:50:31.029719 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 14 00:50:31.029732 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 14 00:50:31.029744 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 14 00:50:31.029757 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 14 00:50:31.029770 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 14 00:50:31.029782 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 14 00:50:31.029795 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 14 00:50:31.029807 kernel: Spectre V2 : Mitigation: Retpolines
Feb 14 00:50:31.029819 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 14 00:50:31.029837 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 14 00:50:31.029850 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 14 00:50:31.029862 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 14 00:50:31.029875 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 14 00:50:31.029887 kernel: MDS: Mitigation: Clear CPU buffers
Feb 14 00:50:31.029899 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 14 00:50:31.029912 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 14 00:50:31.029924 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 14 00:50:31.029950 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 14 00:50:31.029966 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 14 00:50:31.029979 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 14 00:50:31.029998 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 14 00:50:31.030011 kernel: Freeing SMP alternatives memory: 32K
Feb 14 00:50:31.030023 kernel: pid_max: default: 32768 minimum: 301
Feb 14 00:50:31.030036 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 14 00:50:31.030048 kernel: landlock: Up and running.
Feb 14 00:50:31.030061 kernel: SELinux: Initializing.
Feb 14 00:50:31.030073 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 00:50:31.030086 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 00:50:31.030098 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 14 00:50:31.030111 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:50:31.030124 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:50:31.030142 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:50:31.030155 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 14 00:50:31.030168 kernel: signal: max sigframe size: 1776
Feb 14 00:50:31.030180 kernel: rcu: Hierarchical SRCU implementation.
Feb 14 00:50:31.030193 kernel: rcu: Max phase no-delay instances is 400.
Feb 14 00:50:31.030206 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 14 00:50:31.030219 kernel: smp: Bringing up secondary CPUs ...
Feb 14 00:50:31.030232 kernel: smpboot: x86: Booting SMP configuration:
Feb 14 00:50:31.030244 kernel: .... node #0, CPUs: #1
Feb 14 00:50:31.030262 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 14 00:50:31.030275 kernel: smp: Brought up 1 node, 2 CPUs
Feb 14 00:50:31.030287 kernel: smpboot: Max logical packages: 16
Feb 14 00:50:31.030300 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 14 00:50:31.030313 kernel: devtmpfs: initialized
Feb 14 00:50:31.030325 kernel: x86/mm: Memory block size: 128MB
Feb 14 00:50:31.030338 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 14 00:50:31.030351 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 14 00:50:31.030363 kernel: pinctrl core: initialized pinctrl subsystem
Feb 14 00:50:31.030381 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 14 00:50:31.030394 kernel: audit: initializing netlink subsys (disabled)
Feb 14 00:50:31.030406 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 14 00:50:31.030419 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 14 00:50:31.030432 kernel: audit: type=2000 audit(1739494229.410:1): state=initialized audit_enabled=0 res=1
Feb 14 00:50:31.030455 kernel: cpuidle: using governor menu
Feb 14 00:50:31.030470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 14 00:50:31.030482 kernel: dca service started, version 1.12.1
Feb 14 00:50:31.030495 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 14 00:50:31.030514 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 14 00:50:31.030527 kernel: PCI: Using configuration type 1 for base access
Feb 14 00:50:31.030540 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 14 00:50:31.030552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 14 00:50:31.030565 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 14 00:50:31.030578 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 14 00:50:31.030590 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 14 00:50:31.030603 kernel: ACPI: Added _OSI(Module Device)
Feb 14 00:50:31.030615 kernel: ACPI: Added _OSI(Processor Device)
Feb 14 00:50:31.030633 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 14 00:50:31.030646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 14 00:50:31.030658 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 14 00:50:31.030671 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 14 00:50:31.030683 kernel: ACPI: Interpreter enabled
Feb 14 00:50:31.030696 kernel: ACPI: PM: (supports S0 S5)
Feb 14 00:50:31.030708 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 14 00:50:31.030721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 14 00:50:31.030734 kernel: PCI: Using E820 reservations for host bridge windows
Feb 14 00:50:31.030752 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 14 00:50:31.030764 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 14 00:50:31.031025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 00:50:31.031207 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 14 00:50:31.031402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 14 00:50:31.031423 kernel: PCI host bridge to bus 0000:00
Feb 14 00:50:31.031609 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 14 00:50:31.031772 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 14 00:50:31.031924 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 14 00:50:31.032109 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 14 00:50:31.032259 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 14 00:50:31.032408 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 14 00:50:31.032573 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 14 00:50:31.032761 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 14 00:50:31.032992 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 14 00:50:31.033164 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 14 00:50:31.033330 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 14 00:50:31.033508 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 14 00:50:31.033672 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 14 00:50:31.033853 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.037096 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 14 00:50:31.037288 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.037475 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 14 00:50:31.037655 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.037822 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 14 00:50:31.039069 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.039296 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 14 00:50:31.039523 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.039692 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 14 00:50:31.039878 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.041138 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 14 00:50:31.041322 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.041516 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 14 00:50:31.042141 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 14 00:50:31.042319 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 14 00:50:31.042524 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 14 00:50:31.042693 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 14 00:50:31.042857 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 14 00:50:31.044097 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 14 00:50:31.044281 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 14 00:50:31.044476 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 14 00:50:31.044647 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 14 00:50:31.044812 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 14 00:50:31.047750 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 14 00:50:31.048017 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 14 00:50:31.048195 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 14 00:50:31.048397 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 14 00:50:31.048582 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 14 00:50:31.048747 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 14 00:50:31.048918 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 14 00:50:31.049118 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 14 00:50:31.049305 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 14 00:50:31.049528 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 14 00:50:31.049702 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 00:50:31.049881 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 00:50:31.051132 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:50:31.051315 kernel: pci_bus 0000:02: extended config space not accessible
Feb 14 00:50:31.051519 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 14 00:50:31.051709 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 14 00:50:31.051880 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 00:50:31.052120 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 00:50:31.052333 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 14 00:50:31.052517 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 14 00:50:31.052684 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 00:50:31.052846 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 00:50:31.054084 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:50:31.054274 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 14 00:50:31.054463 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 14 00:50:31.054635 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 00:50:31.054809 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 00:50:31.055009 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:50:31.055174 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 00:50:31.055336 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 00:50:31.055522 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:50:31.055687 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 00:50:31.055852 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 00:50:31.057761 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:50:31.058979 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 00:50:31.059170 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 00:50:31.059350 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:50:31.059556 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 00:50:31.059745 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 00:50:31.059918 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:50:31.060102 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 00:50:31.060280 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 00:50:31.060475 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:50:31.060496 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 14 00:50:31.060510 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 14 00:50:31.060523 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 14 00:50:31.060544 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 14 00:50:31.060557 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 14 00:50:31.060570 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 14 00:50:31.060583 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 14 00:50:31.060595 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 14 00:50:31.060608 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 14 00:50:31.060621 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 14 00:50:31.060634 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 14 00:50:31.060646 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 14 00:50:31.060664 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 14 00:50:31.060677 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 14 00:50:31.060690 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 14 00:50:31.060703 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 14 00:50:31.060715 kernel: iommu: Default domain type: Translated
Feb 14 00:50:31.060740 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 14 00:50:31.060752 kernel: PCI: Using ACPI for IRQ routing
Feb 14 00:50:31.060764 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 14 00:50:31.060776 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 14 00:50:31.060805 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 14 00:50:31.063011 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 14 00:50:31.063194 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 14 00:50:31.063376 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 14 00:50:31.063397 kernel: vgaarb: loaded
Feb 14 00:50:31.063410 kernel: clocksource: Switched to clocksource kvm-clock
Feb 14 00:50:31.063435 kernel: VFS: Disk quotas dquot_6.6.0
Feb 14 00:50:31.063462 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 14 00:50:31.063495 kernel: pnp: PnP ACPI init
Feb 14 00:50:31.063668 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 14 00:50:31.063690 kernel: pnp: PnP ACPI: found 5 devices
Feb 14 00:50:31.063704 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 14 00:50:31.063717 kernel: NET: Registered PF_INET protocol family
Feb 14 00:50:31.063730 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 14 00:50:31.063742 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 14 00:50:31.063756 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 14 00:50:31.063768 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 14 00:50:31.063789 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 14 00:50:31.063802 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 14 00:50:31.063815 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 00:50:31.063828 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 00:50:31.063841 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 14 00:50:31.063854 kernel: NET: Registered PF_XDP protocol family
Feb 14 00:50:31.065067 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 14 00:50:31.065243 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 00:50:31.065420 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 00:50:31.065601 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 14 00:50:31.065778 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 14 00:50:31.065956 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 14 00:50:31.067155 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 14 00:50:31.067323 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 14 00:50:31.067517 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 14 00:50:31.067684 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 14 00:50:31.067860 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 14 00:50:31.068066 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 14 00:50:31.068229 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 14 00:50:31.068390 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 14 00:50:31.068568 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 14 00:50:31.068739 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 14 00:50:31.070252 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 00:50:31.070471 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 00:50:31.070646 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 00:50:31.070840 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 14 00:50:31.071044 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 00:50:31.071209 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:50:31.071372 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 00:50:31.071551 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 14 00:50:31.071741 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 00:50:31.071906 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:50:31.072116 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 00:50:31.072280 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 14 00:50:31.072453 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 00:50:31.072629 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:50:31.072801 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 00:50:31.072982 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 14 00:50:31.073148 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 00:50:31.073313 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:50:31.073492 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 00:50:31.073678 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 14 00:50:31.073850 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 00:50:31.074047 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:50:31.074214 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 00:50:31.074389 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 14 00:50:31.074569 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 00:50:31.074749 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:50:31.074929 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 00:50:31.075166 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 14 00:50:31.075340 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 00:50:31.075520 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:50:31.075688 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 00:50:31.075862 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 14 00:50:31.076114 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 00:50:31.076291 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:50:31.076466 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 14 00:50:31.076621 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 14 00:50:31.076779 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 14 00:50:31.076949 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 14 00:50:31.077148 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 14 00:50:31.077309 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 14 00:50:31.077495 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 14 00:50:31.077655 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 14 00:50:31.077811 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:50:31.078025 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 14 00:50:31.078198 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 14 00:50:31.078358 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 14 00:50:31.078528 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:50:31.078700 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 14 00:50:31.078858 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 14 00:50:31.079100 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:50:31.079315 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 14 00:50:31.079485 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 14 00:50:31.079642 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:50:31.079978 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 14 00:50:31.080156 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 14 00:50:31.081268 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:50:31.081475 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 14 00:50:31.081644 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 14 00:50:31.081799 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:50:31.083003 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 14 00:50:31.083168 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 14 00:50:31.083324 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:50:31.083505 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 14 00:50:31.083663 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 14 00:50:31.083840 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:50:31.083861 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 14 00:50:31.083876 kernel: PCI: CLS 0 bytes, default 64
Feb 14 00:50:31.083889 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 14 00:50:31.083916 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 14 00:50:31.083929 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 14 00:50:31.083943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 14 00:50:31.083957 kernel: Initialise system trusted keyrings
Feb 14 00:50:31.085009 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 14 00:50:31.085024 kernel: Key type asymmetric registered
Feb 14 00:50:31.085037 kernel: Asymmetric key parser 'x509' registered
Feb 14 00:50:31.085051 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 14 00:50:31.085064 kernel: io scheduler mq-deadline registered
Feb 14 00:50:31.085077 kernel: io scheduler kyber registered
Feb 14 00:50:31.085091 kernel: io scheduler bfq registered
Feb 14 00:50:31.085271 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 14 00:50:31.085452 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 14 00:50:31.085633 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.085803 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 14 00:50:31.087160 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Feb 14 00:50:31.087333 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.087517 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 14 00:50:31.087683 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 14 00:50:31.087864 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.090105 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 14 00:50:31.090291 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 14 00:50:31.090473 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.090655 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 14 00:50:31.090825 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 14 00:50:31.091022 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.091196 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 14 00:50:31.091363 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 14 00:50:31.091545 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.091719 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 14 00:50:31.091886 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 14 00:50:31.094116 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.094295 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 14 00:50:31.094477 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 14 00:50:31.094648 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 14 00:50:31.094670 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 14 00:50:31.094686 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 14 00:50:31.094708 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 14 00:50:31.094722 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 14 00:50:31.094736 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 14 00:50:31.094754 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 14 00:50:31.094768 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 14 00:50:31.094782 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 14 00:50:31.094796 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 14 00:50:31.095019 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 14 00:50:31.095190 kernel: rtc_cmos 00:03: registered as rtc0
Feb 14 00:50:31.095346 kernel: rtc_cmos 00:03: setting system clock to 2025-02-14T00:50:30 UTC (1739494230)
Feb 14 00:50:31.095515 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 14 00:50:31.095535 kernel: intel_pstate: CPU model not supported
Feb 14 00:50:31.095549 kernel: NET: Registered PF_INET6 protocol family
Feb 14 00:50:31.095563 kernel: Segment Routing with IPv6
Feb 14 00:50:31.095577 kernel: In-situ OAM (IOAM) with IPv6
Feb 14 00:50:31.095590 kernel: NET: Registered PF_PACKET protocol family
Feb 14 00:50:31.095604 kernel: Key type dns_resolver registered
Feb 14 00:50:31.095625 kernel: IPI shorthand broadcast: enabled
Feb 14 00:50:31.095639 kernel: sched_clock: Marking stable (1364003585, 239177085)->(1726427512, -123246842)
Feb 14 00:50:31.095653 kernel: registered taskstats version 1
Feb 14 00:50:31.095666 kernel: Loading compiled-in X.509 certificates
Feb 14 00:50:31.095680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 14 00:50:31.095693 kernel: Key type .fscrypt registered
Feb 14 00:50:31.095706 kernel: Key type fscrypt-provisioning registered
Feb 14 00:50:31.095719 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 14 00:50:31.095738 kernel: ima: Allocated hash algorithm: sha1
Feb 14 00:50:31.095751 kernel: ima: No architecture policies found
Feb 14 00:50:31.095765 kernel: clk: Disabling unused clocks
Feb 14 00:50:31.095778 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 14 00:50:31.095791 kernel: Write protecting the kernel read-only data: 36864k
Feb 14 00:50:31.095805 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 14 00:50:31.095819 kernel: Run /init as init process
Feb 14 00:50:31.095832 kernel: with arguments:
Feb 14 00:50:31.095846 kernel: /init
Feb 14 00:50:31.095858 kernel: with environment:
Feb 14 00:50:31.095877 kernel: HOME=/
Feb 14 00:50:31.095890 kernel: TERM=linux
Feb 14 00:50:31.095903 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 14 00:50:31.095919 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 00:50:31.097014 systemd[1]: Detected virtualization kvm.
Feb 14 00:50:31.097037 systemd[1]: Detected architecture x86-64.
Feb 14 00:50:31.097051 systemd[1]: Running in initrd.
Feb 14 00:50:31.097073 systemd[1]: No hostname configured, using default hostname.
Feb 14 00:50:31.097087 systemd[1]: Hostname set to <localhost>.
Feb 14 00:50:31.097102 systemd[1]: Initializing machine ID from VM UUID.
Feb 14 00:50:31.097117 systemd[1]: Queued start job for default target initrd.target.
Feb 14 00:50:31.097131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 00:50:31.097145 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 00:50:31.097160 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 14 00:50:31.097175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 00:50:31.097194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 14 00:50:31.097209 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 14 00:50:31.097225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 14 00:50:31.097240 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 14 00:50:31.097255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 00:50:31.097269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:50:31.097284 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:50:31.097303 systemd[1]: Reached target slices.target - Slice Units. Feb 14 00:50:31.097317 systemd[1]: Reached target swap.target - Swaps. Feb 14 00:50:31.097331 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:50:31.097346 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:50:31.097360 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:50:31.097375 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 00:50:31.097389 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 00:50:31.097404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:50:31.097419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 00:50:31.097439 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:50:31.097465 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:50:31.097479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 00:50:31.097494 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 00:50:31.097514 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 00:50:31.097529 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 00:50:31.097543 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 00:50:31.097558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 00:50:31.097577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:50:31.097592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 00:50:31.097656 systemd-journald[201]: Collecting audit messages is disabled. Feb 14 00:50:31.097690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:50:31.097711 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 00:50:31.097727 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 00:50:31.097742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:50:31.097757 systemd-journald[201]: Journal started Feb 14 00:50:31.097790 systemd-journald[201]: Runtime Journal (/run/log/journal/c6db6c6ceb774ab6a15b3847c50518b7) is 4.7M, max 38.0M, 33.2M free. Feb 14 00:50:31.099012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 00:50:31.045660 systemd-modules-load[202]: Inserted module 'overlay' Feb 14 00:50:31.160848 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 00:50:31.160883 kernel: Bridge firewalling registered Feb 14 00:50:31.104144 systemd-modules-load[202]: Inserted module 'br_netfilter' Feb 14 00:50:31.171009 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 00:50:31.171878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 00:50:31.178261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 14 00:50:31.180481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:50:31.190183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:50:31.192145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:50:31.197199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 00:50:31.212014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:50:31.224001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:50:31.225266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:50:31.232266 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 00:50:31.243150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 00:50:31.255722 dracut-cmdline[234]: dracut-dracut-053 Feb 14 00:50:31.259614 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 14 00:50:31.294414 systemd-resolved[235]: Positive Trust Anchors: Feb 14 00:50:31.295548 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:50:31.295598 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:50:31.304116 systemd-resolved[235]: Defaulting to hostname 'linux'. Feb 14 00:50:31.306040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:50:31.307235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:50:31.359042 kernel: SCSI subsystem initialized Feb 14 00:50:31.369961 kernel: Loading iSCSI transport class v2.0-870. Feb 14 00:50:31.383965 kernel: iscsi: registered transport (tcp) Feb 14 00:50:31.410065 kernel: iscsi: registered transport (qla4xxx) Feb 14 00:50:31.410167 kernel: QLogic iSCSI HBA Driver Feb 14 00:50:31.464659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 00:50:31.470137 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 00:50:31.510151 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 14 00:50:31.510252 kernel: device-mapper: uevent: version 1.0.3 Feb 14 00:50:31.513973 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 00:50:31.561994 kernel: raid6: sse2x4 gen() 13904 MB/s Feb 14 00:50:31.579005 kernel: raid6: sse2x2 gen() 9580 MB/s Feb 14 00:50:31.597562 kernel: raid6: sse2x1 gen() 10101 MB/s Feb 14 00:50:31.597603 kernel: raid6: using algorithm sse2x4 gen() 13904 MB/s Feb 14 00:50:31.616611 kernel: raid6: .... xor() 7773 MB/s, rmw enabled Feb 14 00:50:31.616670 kernel: raid6: using ssse3x2 recovery algorithm Feb 14 00:50:31.642979 kernel: xor: automatically using best checksumming function avx Feb 14 00:50:31.841016 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 00:50:31.856931 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:50:31.866307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:50:31.897121 systemd-udevd[419]: Using default interface naming scheme 'v255'. Feb 14 00:50:31.904552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:50:31.914225 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 14 00:50:31.933481 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Feb 14 00:50:31.975558 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:50:31.982154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 00:50:32.094012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:50:32.101129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 14 00:50:32.131763 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 14 00:50:32.134620 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 00:50:32.136228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:50:32.137610 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 00:50:32.147105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 14 00:50:32.165047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:50:32.220962 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 14 00:50:32.295104 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 14 00:50:32.295317 kernel: cryptd: max_cpu_qlen set to 1000 Feb 14 00:50:32.295340 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 14 00:50:32.295363 kernel: GPT:17805311 != 125829119 Feb 14 00:50:32.295393 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 14 00:50:32.295411 kernel: GPT:17805311 != 125829119 Feb 14 00:50:32.295441 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 14 00:50:32.295460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 00:50:32.295485 kernel: libata version 3.00 loaded. Feb 14 00:50:32.295504 kernel: ACPI: bus type USB registered Feb 14 00:50:32.295521 kernel: AVX version of gcm_enc/dec engaged. Feb 14 00:50:32.295538 kernel: usbcore: registered new interface driver usbfs Feb 14 00:50:32.295556 kernel: usbcore: registered new interface driver hub Feb 14 00:50:32.295573 kernel: AES CTR mode by8 optimization enabled Feb 14 00:50:32.280237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 14 00:50:32.280418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:50:32.302342 kernel: usbcore: registered new device driver usb Feb 14 00:50:32.281989 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:50:32.282981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:50:32.283245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:50:32.284318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:50:32.299659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:50:32.344200 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 00:50:32.351021 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 14 00:50:32.351261 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 14 00:50:32.351488 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 00:50:32.351697 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 14 00:50:32.351911 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 14 00:50:32.352137 kernel: hub 1-0:1.0: USB hub found Feb 14 00:50:32.352360 kernel: hub 1-0:1.0: 4 ports detected Feb 14 00:50:32.352582 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 14 00:50:32.352799 kernel: hub 2-0:1.0: USB hub found Feb 14 00:50:32.353606 kernel: hub 2-0:1.0: 4 ports detected Feb 14 00:50:32.389010 kernel: ahci 0000:00:1f.2: version 3.0 Feb 14 00:50:32.423272 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 14 00:50:32.423305 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Feb 14 00:50:32.423325 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 14 00:50:32.423570 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 14 00:50:32.423773 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473) Feb 14 00:50:32.423796 kernel: scsi host0: ahci Feb 14 00:50:32.424045 kernel: scsi host1: ahci Feb 14 00:50:32.424266 kernel: scsi host2: ahci Feb 14 00:50:32.424492 kernel: scsi host3: ahci Feb 14 00:50:32.424697 kernel: scsi host4: ahci Feb 14 00:50:32.424889 kernel: scsi host5: ahci Feb 14 00:50:32.426183 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Feb 14 00:50:32.426209 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Feb 14 00:50:32.426237 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Feb 14 00:50:32.426256 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Feb 14 00:50:32.426273 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Feb 14 00:50:32.426291 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Feb 14 00:50:32.406223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 14 00:50:32.489306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:50:32.502788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 14 00:50:32.509041 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Feb 14 00:50:32.509914 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 14 00:50:32.517857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 14 00:50:32.535367 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 14 00:50:32.540428 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 00:50:32.543967 disk-uuid[559]: Primary Header is updated.
Feb 14 00:50:32.543967 disk-uuid[559]: Secondary Entries is updated.
Feb 14 00:50:32.543967 disk-uuid[559]: Secondary Header is updated.
Feb 14 00:50:32.551986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 00:50:32.559506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 00:50:32.585698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 00:50:32.590055 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Feb 14 00:50:32.732660 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.732736 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.732756 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.732774 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.737970 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.738008 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 14 00:50:32.759964 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 14 00:50:32.766979 kernel: usbcore: registered new interface driver usbhid
Feb 14 00:50:32.767024 kernel: usbhid: USB HID core driver
Feb 14 00:50:32.775394 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Feb 14 00:50:32.775445 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Feb 14 00:50:33.570057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 00:50:33.570658 disk-uuid[560]: The operation has completed successfully.
Feb 14 00:50:33.621087 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 14 00:50:33.621286 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 14 00:50:33.647186 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 14 00:50:33.652328 sh[583]: Success
Feb 14 00:50:33.671073 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Feb 14 00:50:33.737382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 14 00:50:33.755104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 14 00:50:33.759020 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 14 00:50:33.783598 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 14 00:50:33.783676 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:50:33.787416 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 14 00:50:33.787457 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 14 00:50:33.789121 kernel: BTRFS info (device dm-0): using free space tree
Feb 14 00:50:33.799846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
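
The BTRFS warning above notes that the read-only /usr mount still passes the deprecated 'nologreplay' option. A minimal sketch of the modern spelling, assuming a manual mount of the verity-backed device; 'rescue=nologreplay' is only accepted together with 'ro', and /mnt is an illustrative mount point:

    # Mount read-only without replaying the log, using the
    # non-deprecated option named in the warning above.
    mount -o ro,rescue=nologreplay /dev/mapper/usr /mnt
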
Feb 14 00:50:33.801522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 14 00:50:33.809183 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 14 00:50:33.811961 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 14 00:50:33.831074 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:50:33.831135 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:50:33.831157 kernel: BTRFS info (device vda6): using free space tree
Feb 14 00:50:33.837965 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 00:50:33.852540 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 14 00:50:33.855733 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:50:33.861719 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 14 00:50:33.870188 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 14 00:50:33.959961 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 00:50:33.973507 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 14 00:50:34.008713 systemd-networkd[767]: lo: Link UP
Feb 14 00:50:34.008727 systemd-networkd[767]: lo: Gained carrier
Feb 14 00:50:34.011621 systemd-networkd[767]: Enumeration completed
Feb 14 00:50:34.012158 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 14 00:50:34.013201 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 00:50:34.013207 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 00:50:34.014906 systemd[1]: Reached target network.target - Network.
Feb 14 00:50:34.016975 systemd-networkd[767]: eth0: Link UP
Feb 14 00:50:34.016982 systemd-networkd[767]: eth0: Gained carrier
Feb 14 00:50:34.017008 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 00:50:34.032885 ignition[678]: Ignition 2.19.0
Feb 14 00:50:34.033261 ignition[678]: Stage: fetch-offline
Feb 14 00:50:34.033361 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:34.036076 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 00:50:34.033397 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:34.033607 ignition[678]: parsed url from cmdline: ""
Feb 14 00:50:34.033614 ignition[678]: no config URL provided
Feb 14 00:50:34.033624 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 00:50:34.033647 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Feb 14 00:50:34.033657 ignition[678]: failed to fetch config: resource requires networking
Feb 14 00:50:34.040355 systemd-networkd[767]: eth0: DHCPv4 address 10.230.17.110/30, gateway 10.230.17.109 acquired from 10.230.17.109
Feb 14 00:50:34.034560 ignition[678]: Ignition finished successfully
Feb 14 00:50:34.050354 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 14 00:50:34.071795 ignition[776]: Ignition 2.19.0
Feb 14 00:50:34.071818 ignition[776]: Stage: fetch
Feb 14 00:50:34.072098 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:34.072118 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:34.072273 ignition[776]: parsed url from cmdline: ""
Feb 14 00:50:34.072280 ignition[776]: no config URL provided
Feb 14 00:50:34.072290 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 00:50:34.072306 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Feb 14 00:50:34.072532 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 14 00:50:34.072677 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 14 00:50:34.072738 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 14 00:50:34.089779 ignition[776]: GET result: OK
Feb 14 00:50:34.090004 ignition[776]: parsing config with SHA512: 50772e79d9872792e593415d486b02dd6e635721181307f75de3ee347de3286a1de79b19d28a9bb146dd1140145992eadb4a9f39b2e5a17f0701ffa0dcdf3726
Feb 14 00:50:34.096352 unknown[776]: fetched base config from "system"
Feb 14 00:50:34.096397 unknown[776]: fetched base config from "system"
Feb 14 00:50:34.096837 ignition[776]: fetch: fetch complete
Feb 14 00:50:34.096409 unknown[776]: fetched user config from "openstack"
Feb 14 00:50:34.096846 ignition[776]: fetch: fetch passed
Feb 14 00:50:34.098754 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 14 00:50:34.096909 ignition[776]: Ignition finished successfully
Feb 14 00:50:34.108335 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 14 00:50:34.128794 ignition[782]: Ignition 2.19.0
Feb 14 00:50:34.128808 ignition[782]: Stage: kargs
Feb 14 00:50:34.129081 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:34.131537 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 14 00:50:34.129101 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:34.130244 ignition[782]: kargs: kargs passed
Feb 14 00:50:34.130319 ignition[782]: Ignition finished successfully
Feb 14 00:50:34.142213 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 14 00:50:34.164738 ignition[790]: Ignition 2.19.0
Feb 14 00:50:34.164762 ignition[790]: Stage: disks
Feb 14 00:50:34.165062 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:34.167573 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 14 00:50:34.165083 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:34.169818 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 14 00:50:34.166245 ignition[790]: disks: disks passed
Feb 14 00:50:34.171533 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 14 00:50:34.166327 ignition[790]: Ignition finished successfully
Feb 14 00:50:34.173202 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 00:50:34.174502 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 00:50:34.176148 systemd[1]: Reached target basic.target - Basic System.
Feb 14 00:50:34.191307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
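
The fetch stage above pulled user data from the OpenStack metadata endpoint and merged it with the base config. A minimal sketch of the kind of payload that endpoint can serve, assuming the Ignition spec 3.x JSON format; the version string and ssh key are placeholders, and the real config behind the SHA512 in the log is not visible here:

    # Hypothetical minimal Ignition config of the sort served at
    # http://169.254.169.254/openstack/latest/user_data
    cat > user_data <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"] }
        ]
      }
    }
    EOF
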
Feb 14 00:50:34.215315 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 14 00:50:34.218267 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 14 00:50:34.224524 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 14 00:50:34.348459 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 14 00:50:34.349529 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 14 00:50:34.351065 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 14 00:50:34.357040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 00:50:34.369183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 14 00:50:34.372492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 14 00:50:34.375273 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 14 00:50:34.377001 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 14 00:50:34.377097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 00:50:34.383899 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 14 00:50:34.392776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Feb 14 00:50:34.392816 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:50:34.392836 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:50:34.392865 kernel: BTRFS info (device vda6): using free space tree
Feb 14 00:50:34.397057 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 00:50:34.402225 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 14 00:50:34.413070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 00:50:34.482209 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Feb 14 00:50:34.490619 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Feb 14 00:50:34.500250 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Feb 14 00:50:34.506711 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 14 00:50:34.613007 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 14 00:50:34.619096 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 14 00:50:34.621143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 14 00:50:34.634975 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:50:34.662441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 14 00:50:34.674819 ignition[925]: INFO : Ignition 2.19.0
Feb 14 00:50:34.675983 ignition[925]: INFO : Stage: mount
Feb 14 00:50:34.677211 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:34.677211 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:34.680514 ignition[925]: INFO : mount: mount passed
Feb 14 00:50:34.681284 ignition[925]: INFO : Ignition finished successfully
Feb 14 00:50:34.681597 systemd[1]: Finished ignition-mount.service - Ignition (mount).
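
systemd-fsck above reported the ext4 ROOT filesystem clean before it was mounted at /sysroot. A minimal sketch of the equivalent manual check against the same by-label path; -n keeps the run strictly read-only:

    # Read-only check of the filesystem systemd-fsck verified above.
    e2fsck -n /dev/disk/by-label/ROOT
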
Feb 14 00:50:34.782009 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 14 00:50:35.976733 systemd-networkd[767]: eth0: Gained IPv6LL
Feb 14 00:50:36.143699 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:845b:24:19ff:fee6:116e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:845b:24:19ff:fee6:116e/64 assigned by NDisc.
Feb 14 00:50:36.143720 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 14 00:50:41.555343 coreos-metadata[808]: Feb 14 00:50:41.555 WARN failed to locate config-drive, using the metadata service API instead
Feb 14 00:50:41.578518 coreos-metadata[808]: Feb 14 00:50:41.578 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 14 00:50:41.592979 coreos-metadata[808]: Feb 14 00:50:41.592 INFO Fetch successful
Feb 14 00:50:41.594106 coreos-metadata[808]: Feb 14 00:50:41.594 INFO wrote hostname srv-2zttm.gb1.brightbox.com to /sysroot/etc/hostname
Feb 14 00:50:41.596338 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 14 00:50:41.597188 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 14 00:50:41.611102 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 14 00:50:41.625199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 00:50:41.640925 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Feb 14 00:50:41.641004 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:50:41.643324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:50:41.643375 kernel: BTRFS info (device vda6): using free space tree
Feb 14 00:50:41.648969 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 00:50:41.652110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
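
networkd above drops the conflicting DHCPv6 /128 address and prints its own remedy. A minimal sketch of a drop-in along the lines of that hint, assuming systemd 255 where Token= in the [IPv6AcceptRA] section is the current spelling of the older IPv6Token=; the drop-in path and the choice of prefixstable are illustrative:

    # Drop-in implementing the hint above: derive the SLAAC interface
    # identifier differently (or disable autonomous prefixes outright).
    mkdir -p /etc/systemd/network/zz-default.network.d
    cat > /etc/systemd/network/zz-default.network.d/10-ipv6-token.conf <<'EOF'
    [IPv6AcceptRA]
    Token=prefixstable
    # Alternatively, per the same hint:
    # UseAutonomousPrefix=no
    EOF
    networkctl reload
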
Feb 14 00:50:41.680801 ignition[960]: INFO : Ignition 2.19.0
Feb 14 00:50:41.680801 ignition[960]: INFO : Stage: files
Feb 14 00:50:41.682685 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:41.682685 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:41.682685 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 14 00:50:41.685829 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 14 00:50:41.685829 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 14 00:50:41.687876 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 14 00:50:41.688883 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 14 00:50:41.688883 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 14 00:50:41.688518 unknown[960]: wrote ssh authorized keys file for user: core
Feb 14 00:50:41.692124 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 14 00:50:41.692124 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 14 00:50:41.847859 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 14 00:50:42.106099 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 00:50:42.107995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 14 00:50:42.123740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 14 00:50:42.677296 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 14 00:50:45.904356 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 14 00:50:45.904356 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 00:50:45.909744 ignition[960]: INFO : files: files passed
Feb 14 00:50:45.909744 ignition[960]: INFO : Ignition finished successfully
Feb 14 00:50:45.909262 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 14 00:50:45.919419 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 14 00:50:45.930210 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 14 00:50:45.936215 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 14 00:50:45.937141 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 14 00:50:45.947722 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:50:45.947722 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:50:45.951051 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:50:45.953615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 00:50:45.956383 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 14 00:50:45.962190 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 14 00:50:46.012112 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 14 00:50:46.012320 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 14 00:50:46.014242 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 14 00:50:46.015718 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 14 00:50:46.016520 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 14 00:50:46.025407 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 14 00:50:46.043547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 00:50:46.052438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 14 00:50:46.071619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 14 00:50:46.072819 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 00:50:46.074538 systemd[1]: Stopped target timers.target - Timer Units.
Feb 14 00:50:46.076095 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 14 00:50:46.076314 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 00:50:46.078260 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 14 00:50:46.079431 systemd[1]: Stopped target basic.target - Basic System.
Feb 14 00:50:46.080463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 14 00:50:46.082239 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 00:50:46.083818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 14 00:50:46.084869 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 14 00:50:46.086332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 00:50:46.088076 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 14 00:50:46.089741 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 14 00:50:46.091274 systemd[1]: Stopped target swap.target - Swaps.
Feb 14 00:50:46.092624 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 14 00:50:46.092860 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 00:50:46.094975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 14 00:50:46.096058 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 00:50:46.097418 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 14 00:50:46.097604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 00:50:46.098990 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 14 00:50:46.099176 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 14 00:50:46.100981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 14 00:50:46.101234 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 00:50:46.103065 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 14 00:50:46.103283 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 14 00:50:46.116831 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 14 00:50:46.117670 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 14 00:50:46.117952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 00:50:46.121242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 14 00:50:46.123046 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 14 00:50:46.123313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 00:50:46.128011 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 14 00:50:46.128398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 00:50:46.142472 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 14 00:50:46.142912 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 14 00:50:46.146618 ignition[1012]: INFO : Ignition 2.19.0
Feb 14 00:50:46.146618 ignition[1012]: INFO : Stage: umount
Feb 14 00:50:46.146618 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:50:46.146618 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:50:46.150922 ignition[1012]: INFO : umount: umount passed
Feb 14 00:50:46.150922 ignition[1012]: INFO : Ignition finished successfully
Feb 14 00:50:46.151980 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 14 00:50:46.152146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 14 00:50:46.154299 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 14 00:50:46.154460 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 14 00:50:46.157326 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 14 00:50:46.157399 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 14 00:50:46.158569 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 14 00:50:46.158637 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 14 00:50:46.159368 systemd[1]: Stopped target network.target - Network.
Feb 14 00:50:46.162074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 14 00:50:46.162184 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 00:50:46.163306 systemd[1]: Stopped target paths.target - Path Units.
Feb 14 00:50:46.163926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 14 00:50:46.169016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 00:50:46.169882 systemd[1]: Stopped target slices.target - Slice Units.
Feb 14 00:50:46.171397 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 14 00:50:46.173218 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 14 00:50:46.173293 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 00:50:46.174536 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 14 00:50:46.174604 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 00:50:46.176012 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 14 00:50:46.176112 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 14 00:50:46.177416 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 14 00:50:46.177489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 14 00:50:46.179382 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 14 00:50:46.181872 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 14 00:50:46.184914 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 14 00:50:46.185118 systemd-networkd[767]: eth0: DHCPv6 lease lost
Feb 14 00:50:46.188680 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 14 00:50:46.188817 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 14 00:50:46.190562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 14 00:50:46.190760 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 14 00:50:46.194780 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 14 00:50:46.195451 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 00:50:46.196700 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 14 00:50:46.196774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 14 00:50:46.204062 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 14 00:50:46.205443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 14 00:50:46.205543 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 00:50:46.210121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 00:50:46.212072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 14 00:50:46.212283 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 14 00:50:46.227525 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 14 00:50:46.228910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 00:50:46.230617 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 14 00:50:46.230769 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 14 00:50:46.234205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 14 00:50:46.234299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 14 00:50:46.236038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 14 00:50:46.236099 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 00:50:46.238575 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 14 00:50:46.238655 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 00:50:46.241124 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 14 00:50:46.241213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 14 00:50:46.242554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 00:50:46.242629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 00:50:46.261267 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 14 00:50:46.264341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 14 00:50:46.264425 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 14 00:50:46.266052 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 14 00:50:46.266122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 14 00:50:46.267440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 14 00:50:46.267508 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 00:50:46.272102 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 14 00:50:46.272190 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 00:50:46.273101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 00:50:46.273182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 00:50:46.275409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 14 00:50:46.275555 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 14 00:50:46.277171 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 14 00:50:46.285179 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 14 00:50:46.296419 systemd[1]: Switching root.
Feb 14 00:50:46.337003 systemd-journald[201]: Journal stopped
Feb 14 00:50:47.778544 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Feb 14 00:50:47.778664 kernel: SELinux: policy capability network_peer_controls=1
Feb 14 00:50:47.778692 kernel: SELinux: policy capability open_perms=1
Feb 14 00:50:47.778711 kernel: SELinux: policy capability extended_socket_class=1
Feb 14 00:50:47.778730 kernel: SELinux: policy capability always_check_network=0
Feb 14 00:50:47.778779 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 14 00:50:47.778816 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 14 00:50:47.778837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 14 00:50:47.778870 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 14 00:50:47.778891 kernel: audit: type=1403 audit(1739494246.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 14 00:50:47.778918 systemd[1]: Successfully loaded SELinux policy in 53.924ms.
Feb 14 00:50:47.778959 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.382ms.
Feb 14 00:50:47.778990 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 00:50:47.779013 systemd[1]: Detected virtualization kvm.
Feb 14 00:50:47.779033 systemd[1]: Detected architecture x86-64.
Feb 14 00:50:47.779052 systemd[1]: Detected first boot.
Feb 14 00:50:47.779088 systemd[1]: Hostname set to <srv-2zttm.gb1.brightbox.com>.
Feb 14 00:50:47.779110 systemd[1]: Initializing machine ID from VM UUID.
Feb 14 00:50:47.779142 zram_generator::config[1054]: No configuration found.
Feb 14 00:50:47.779174 systemd[1]: Populated /etc with preset unit settings.
Feb 14 00:50:47.779196 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 14 00:50:47.779216 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 14 00:50:47.779236 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 14 00:50:47.779258 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 14 00:50:47.779292 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 14 00:50:47.779315 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 14 00:50:47.779350 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 14 00:50:47.779372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 14 00:50:47.779392 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 14 00:50:47.779413 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 14 00:50:47.779433 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 14 00:50:47.779453 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 00:50:47.779474 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 00:50:47.779511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 14 00:50:47.779534 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 14 00:50:47.779555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 14 00:50:47.779576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 00:50:47.779603 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 14 00:50:47.779626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 00:50:47.779647 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 14 00:50:47.779680 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 14 00:50:47.779702 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 14 00:50:47.779724 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 14 00:50:47.779745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 00:50:47.779772 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 00:50:47.779793 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 00:50:47.779813 systemd[1]: Reached target swap.target - Swaps.
Feb 14 00:50:47.779848 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 14 00:50:47.779882 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 14 00:50:47.779904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 00:50:47.779925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 00:50:47.780094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 00:50:47.780147 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 14 00:50:47.780186 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 14 00:50:47.780230 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 14 00:50:47.780253 systemd[1]: Mounting media.mount - External Media Directory...
Feb 14 00:50:47.780274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:47.780295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 14 00:50:47.780315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 14 00:50:47.780336 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 14 00:50:47.780357 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 14 00:50:47.780378 systemd[1]: Reached target machines.target - Containers.
Feb 14 00:50:47.780412 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 14 00:50:47.780435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:50:47.780457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 00:50:47.780477 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 14 00:50:47.780497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 00:50:47.780517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 14 00:50:47.780539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 00:50:47.780560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 14 00:50:47.780580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 14 00:50:47.780614 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 14 00:50:47.780637 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 14 00:50:47.780659 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 14 00:50:47.780679 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 14 00:50:47.780699 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 14 00:50:47.780719 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 00:50:47.780739 kernel: fuse: init (API version 7.39)
Feb 14 00:50:47.780759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 00:50:47.780780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 14 00:50:47.780813 kernel: loop: module loaded
Feb 14 00:50:47.780849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 14 00:50:47.780871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 00:50:47.780930 systemd-journald[1150]: Collecting audit messages is disabled.
Feb 14 00:50:47.780983 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 14 00:50:47.781007 systemd[1]: Stopped verity-setup.service.
Feb 14 00:50:47.781028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:47.781065 systemd-journald[1150]: Journal started
Feb 14 00:50:47.781101 systemd-journald[1150]: Runtime Journal (/run/log/journal/c6db6c6ceb774ab6a15b3847c50518b7) is 4.7M, max 38.0M, 33.2M free.
Feb 14 00:50:47.395114 systemd[1]: Queued start job for default target multi-user.target.
Feb 14 00:50:47.418405 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 14 00:50:47.419250 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 14 00:50:47.790425 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 14 00:50:47.790481 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 00:50:47.792860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 14 00:50:47.793771 systemd[1]: Mounted media.mount - External Media Directory.
Feb 14 00:50:47.794689 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 14 00:50:47.795674 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 14 00:50:47.796600 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 14 00:50:47.797705 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 14 00:50:47.798814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 00:50:47.800782 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 14 00:50:47.801219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 14 00:50:47.802451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 00:50:47.803011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 00:50:47.806510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 00:50:47.809037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 00:50:47.810268 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 14 00:50:47.810476 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 14 00:50:47.811586 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 14 00:50:47.811772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 14 00:50:47.812874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 00:50:47.814003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 14 00:50:47.829273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 14 00:50:47.829995 kernel: ACPI: bus type drm_connector registered
Feb 14 00:50:47.830840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 14 00:50:47.831186 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 14 00:50:47.839099 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 14 00:50:47.846232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 14 00:50:47.855357 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 14 00:50:47.856333 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 14 00:50:47.856384 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 00:50:47.860647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 14 00:50:47.871232 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 14 00:50:47.878150 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 14 00:50:47.879167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:50:47.881464 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 14 00:50:47.890171 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 14 00:50:47.893110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 14 00:50:47.896202 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 14 00:50:47.898073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 14 00:50:47.900803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 00:50:47.903998 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 14 00:50:47.910153 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 14 00:50:47.918502 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 14 00:50:47.929543 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 14 00:50:47.934042 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 14 00:50:47.966789 systemd-journald[1150]: Time spent on flushing to /var/log/journal/c6db6c6ceb774ab6a15b3847c50518b7 is 125.005ms for 1138 entries.
Feb 14 00:50:47.966789 systemd-journald[1150]: System Journal (/var/log/journal/c6db6c6ceb774ab6a15b3847c50518b7) is 8.0M, max 584.8M, 576.8M free.
Feb 14 00:50:48.124911 systemd-journald[1150]: Received client request to flush runtime journal.
Feb 14 00:50:48.126975 kernel: loop0: detected capacity change from 0 to 140768
Feb 14 00:50:48.128837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 14 00:50:48.128883 kernel: loop1: detected capacity change from 0 to 142488
Feb 14 00:50:47.989766 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 14 00:50:47.991491 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 14 00:50:48.000300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 14 00:50:48.053308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 00:50:48.121193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 14 00:50:48.123213 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 14 00:50:48.140967 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 14 00:50:48.144671 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 14 00:50:48.156225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 00:50:48.167864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 00:50:48.182514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 14 00:50:48.214427 kernel: loop2: detected capacity change from 0 to 8
Feb 14 00:50:48.221854 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 14 00:50:48.248232 kernel: loop3: detected capacity change from 0 to 205544
Feb 14 00:50:48.260310 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Feb 14 00:50:48.260339 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Feb 14 00:50:48.276646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 00:50:48.312991 kernel: loop4: detected capacity change from 0 to 140768
Feb 14 00:50:48.346055 kernel: loop5: detected capacity change from 0 to 142488
Feb 14 00:50:48.373184 kernel: loop6: detected capacity change from 0 to 8
Feb 14 00:50:48.381974 kernel: loop7: detected capacity change from 0 to 205544
Feb 14 00:50:48.399984 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 14 00:50:48.400858 (sd-merge)[1212]: Merged extensions into '/usr'.
Feb 14 00:50:48.408009 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 14 00:50:48.409981 systemd[1]: Reloading...
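
(sd-merge) above overlaid the containerd, docker, kubernetes, and OEM extension images onto /usr. A minimal sketch of inspecting that merged state at runtime; these are standard systemd-sysext verbs, and the output naturally depends on the images present:

    systemd-sysext status    # which hierarchies are extended, and by what
    systemd-sysext list      # extension images visible to the manager
    systemd-sysext refresh   # re-merge after adding or removing an image
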
Feb 14 00:50:48.612081 zram_generator::config[1238]: No configuration found.
Feb 14 00:50:48.699915 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 14 00:50:48.850004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 00:50:48.923318 systemd[1]: Reloading finished in 512 ms.
Feb 14 00:50:48.952200 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 14 00:50:48.953595 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 14 00:50:48.967285 systemd[1]: Starting ensure-sysext.service...
Feb 14 00:50:48.971830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 00:50:48.994195 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Feb 14 00:50:48.994222 systemd[1]: Reloading...
Feb 14 00:50:49.004013 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 14 00:50:49.005306 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 14 00:50:49.007106 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 14 00:50:49.007531 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Feb 14 00:50:49.007644 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Feb 14 00:50:49.013073 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 00:50:49.013235 systemd-tmpfiles[1295]: Skipping /boot
Feb 14 00:50:49.028128 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 00:50:49.028299 systemd-tmpfiles[1295]: Skipping /boot
Feb 14 00:50:49.125045 zram_generator::config[1325]: No configuration found.
Feb 14 00:50:49.295356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 00:50:49.364008 systemd[1]: Reloading finished in 369 ms.
Feb 14 00:50:49.390065 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 14 00:50:49.394500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 00:50:49.409184 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 14 00:50:49.419238 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 14 00:50:49.423048 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 14 00:50:49.431205 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 14 00:50:49.439312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 00:50:49.444225 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 14 00:50:49.455829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.456343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
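
The docker.socket warning above appears once per reload and asks for the unit file to stop referencing the legacy /var/run path. A minimal sketch of a drop-in answering that request, with an illustrative file name; the empty ListenStream= first clears the inherited value before the replacement is added:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-runpath.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
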
Feb 14 00:50:49.467408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 00:50:49.472388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 00:50:49.474664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 14 00:50:49.476128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:50:49.476293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.480848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.481173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:50:49.481404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:50:49.491549 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 14 00:50:49.492448 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.505062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.505485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:50:49.520559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 14 00:50:49.522017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:50:49.522233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:50:49.524477 systemd[1]: Finished ensure-sysext.service.
Feb 14 00:50:49.529205 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 14 00:50:49.543195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 14 00:50:49.554002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 00:50:49.554607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 00:50:49.565814 systemd-udevd[1388]: Using default interface naming scheme 'v255'.
Feb 14 00:50:49.600432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 14 00:50:49.611243 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 14 00:50:49.612636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 00:50:49.615238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 00:50:49.616612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 14 00:50:49.616885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 14 00:50:49.618380 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 14 00:50:49.618639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 14 00:50:49.622695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 00:50:49.622869 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 00:50:49.627867 augenrules[1413]: No rules Feb 14 00:50:49.634117 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 00:50:49.649033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:50:49.662337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 00:50:49.663579 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 00:50:49.675249 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 00:50:49.676418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 14 00:50:49.679993 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 00:50:49.850518 systemd-resolved[1384]: Positive Trust Anchors: Feb 14 00:50:49.850554 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:50:49.850600 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:50:49.858879 systemd-networkd[1424]: lo: Link UP Feb 14 00:50:49.860979 systemd-networkd[1424]: lo: Gained carrier Feb 14 00:50:49.863763 systemd-resolved[1384]: Using system hostname 'srv-2zttm.gb1.brightbox.com'. Feb 14 00:50:49.864587 systemd-networkd[1424]: Enumeration completed Feb 14 00:50:49.864779 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 00:50:49.875213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 14 00:50:49.876222 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:50:49.877675 systemd[1]: Reached target network.target - Network. Feb 14 00:50:49.879041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:50:49.891176 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 00:50:49.892413 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 00:50:49.923445 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 14 00:50:49.963984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1428) Feb 14 00:50:50.041980 kernel: mousedev: PS/2 mouse device common for all mice Feb 14 00:50:50.069255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
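[Note] The trust-anchor dump above is systemd-resolved printing its built-in DNSSEC root DS record plus the default negative anchors for private and reverse-lookup zones. A sketch for inspecting the live resolver state, assuming resolvectl is available on this image:

    resolvectl status                # per-link DNS servers and DNSSEC mode
    resolvectl query flatcar.org     # hypothetical lookup to exercise the resolver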
Feb 14 00:50:50.071331 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 00:50:50.075523 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 14 00:50:50.072287 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:50:50.076204 systemd-networkd[1424]: eth0: Link UP Feb 14 00:50:50.076492 systemd-networkd[1424]: eth0: Gained carrier Feb 14 00:50:50.076639 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 00:50:50.080539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 14 00:50:50.089979 kernel: ACPI: button: Power Button [PWRF] Feb 14 00:50:50.092064 systemd-networkd[1424]: eth0: DHCPv4 address 10.230.17.110/30, gateway 10.230.17.109 acquired from 10.230.17.109 Feb 14 00:50:50.095045 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Feb 14 00:50:50.114055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 14 00:50:50.138973 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 14 00:50:50.153955 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 14 00:50:50.154260 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 14 00:50:50.154880 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 14 00:50:50.240651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:50:50.910800 systemd-resolved[1384]: Clock change detected. Flushing caches. Feb 14 00:50:50.910835 systemd-timesyncd[1405]: Contacted time server 81.130.79.209:123 (0.flatcar.pool.ntp.org). Feb 14 00:50:50.910967 systemd-timesyncd[1405]: Initial clock synchronization to Fri 2025-02-14 00:50:50.910605 UTC. Feb 14 00:50:50.944813 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 14 00:50:50.973658 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 00:50:50.975254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:50:51.003418 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 00:50:51.036227 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 00:50:51.038101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:50:51.038931 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 00:50:51.039858 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 14 00:50:51.040814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 00:50:51.041955 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 00:50:51.043050 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 00:50:51.043873 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
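[Note] systemd-networkd warns twice that eth0 only matched the catch-all zz-default.network via a "potentially unpredictable interface name". A minimal sketch of a dedicated .network file that pins the match explicitly (file name is a placeholder; DHCP mirrors the DHCPv4 lease shown in the log):

    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    # Matching by MACAddress= would be even more stable than the name
    Name=eth0

    [Network]
    DHCP=ipv4
    EOF
    systemctl restart systemd-networkd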
Feb 14 00:50:51.045100 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 00:50:51.045159 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:50:51.045942 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:50:51.048435 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 00:50:51.052132 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 14 00:50:51.058927 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 00:50:51.070972 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 00:50:51.072917 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 14 00:50:51.074042 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:50:51.074937 systemd[1]: Reached target basic.target - Basic System. Feb 14 00:50:51.075779 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 00:50:51.075832 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 00:50:51.080054 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 00:50:51.082576 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 00:50:51.097821 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 14 00:50:51.102637 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 00:50:51.107579 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 00:50:51.118683 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 00:50:51.119919 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 00:50:51.128188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 00:50:51.130598 jq[1478]: false Feb 14 00:50:51.136618 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 14 00:50:51.145697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 14 00:50:51.155656 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 14 00:50:51.165988 extend-filesystems[1479]: Found loop4 Feb 14 00:50:51.169478 extend-filesystems[1479]: Found loop5 Feb 14 00:50:51.169478 extend-filesystems[1479]: Found loop6 Feb 14 00:50:51.169478 extend-filesystems[1479]: Found loop7 Feb 14 00:50:51.169478 extend-filesystems[1479]: Found vda Feb 14 00:50:51.168666 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda1 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda2 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda3 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found usr Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda4 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda6 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda7 Feb 14 00:50:51.185051 extend-filesystems[1479]: Found vda9 Feb 14 00:50:51.185051 extend-filesystems[1479]: Checking size of /dev/vda9 Feb 14 00:50:51.172485 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 00:50:51.173326 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 14 00:50:51.183286 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 00:50:51.196573 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 00:50:51.200456 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 00:50:51.210218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 00:50:51.211022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 00:50:51.237417 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1428) Feb 14 00:50:51.241783 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 14 00:50:51.242562 extend-filesystems[1479]: Resized partition /dev/vda9 Feb 14 00:50:51.242892 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 00:50:51.264273 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024) Feb 14 00:50:51.279867 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 00:50:51.280338 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 00:50:51.285416 jq[1496]: true Feb 14 00:50:51.289444 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 14 00:50:51.294010 dbus-daemon[1475]: [system] SELinux support is enabled Feb 14 00:50:51.294257 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 14 00:50:51.298615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 00:50:51.299219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 14 00:50:51.300131 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 00:50:51.300163 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 00:50:51.311336 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1424 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 14 00:50:51.316586 tar[1499]: linux-amd64/helm Feb 14 00:50:51.325659 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
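[Note] extend-filesystems grows the root filesystem online; the resize2fs 1.47.1 run above takes /dev/vda9 from 1617920 to 15121403 4k blocks while it is mounted at /. The equivalent manual step (device from the log):

    resize2fs /dev/vda9    # ext4 supports online growth of a mounted filesystem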
Feb 14 00:50:51.328111 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 00:50:51.355459 update_engine[1493]: I20250214 00:50:51.354817 1493 main.cc:92] Flatcar Update Engine starting Feb 14 00:50:51.375533 update_engine[1493]: I20250214 00:50:51.370662 1493 update_check_scheduler.cc:74] Next update check in 7m43s Feb 14 00:50:51.374765 systemd[1]: Started update-engine.service - Update Engine. Feb 14 00:50:51.388820 jq[1513]: true Feb 14 00:50:51.390859 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 00:50:51.495078 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button) Feb 14 00:50:51.495124 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 14 00:50:51.495558 systemd-logind[1486]: New seat seat0. Feb 14 00:50:51.502754 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 00:50:51.612421 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 14 00:50:51.619512 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:50:51.617351 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 00:50:51.641000 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 14 00:50:51.629796 systemd[1]: Starting sshkeys.service... Feb 14 00:50:51.645307 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1515 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 14 00:50:51.641198 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 14 00:50:51.652749 systemd[1]: Starting polkit.service - Authorization Manager... Feb 14 00:50:51.679527 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 14 00:50:51.687210 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 14 00:50:51.695634 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 14 00:50:51.695634 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 14 00:50:51.695634 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 14 00:50:51.711467 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Feb 14 00:50:51.698643 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 00:50:51.698953 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 00:50:51.728076 polkitd[1539]: Started polkitd version 121 Feb 14 00:50:51.730932 systemd-networkd[1424]: eth0: Gained IPv6LL Feb 14 00:50:51.740754 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 00:50:51.743133 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 00:50:51.755582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:50:51.763934 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
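[Note] update-engine schedules its first check ("Next update check in 7m43s") and locksmithd, the cluster reboot manager, starts alongside it; the next lines show it running with strategy "reboot". On Flatcar the strategy is conventionally set in /etc/flatcar/update.conf; a sketch, assuming the defaults are in effect here:

    cat > /etc/flatcar/update.conf <<'EOF'
    # one of: reboot | etcd-lock | off
    REBOOT_STRATEGY=reboot
    EOF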
Feb 14 00:50:51.771348 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d Feb 14 00:50:51.780239 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 14 00:50:51.783762 polkitd[1539]: Finished loading, compiling and executing 2 rules Feb 14 00:50:51.787585 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 14 00:50:51.787829 systemd[1]: Started polkit.service - Authorization Manager. Feb 14 00:50:51.791819 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 14 00:50:51.858640 systemd-hostnamed[1515]: Hostname set to (static) Feb 14 00:50:51.901838 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 00:50:51.934594 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 00:50:52.041769 containerd[1509]: time="2025-02-14T00:50:52.040983579Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 00:50:52.162518 containerd[1509]: time="2025-02-14T00:50:52.161408916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.171645 containerd[1509]: time="2025-02-14T00:50:52.171167713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:50:52.171645 containerd[1509]: time="2025-02-14T00:50:52.171247240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 00:50:52.171645 containerd[1509]: time="2025-02-14T00:50:52.171279255Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 00:50:52.171645 containerd[1509]: time="2025-02-14T00:50:52.171646052Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 00:50:52.171866 containerd[1509]: time="2025-02-14T00:50:52.171698361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.171866 containerd[1509]: time="2025-02-14T00:50:52.171827369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:50:52.171866 containerd[1509]: time="2025-02-14T00:50:52.171851732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.172161 containerd[1509]: time="2025-02-14T00:50:52.172124085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:50:52.172242 containerd[1509]: time="2025-02-14T00:50:52.172167462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.172242 containerd[1509]: time="2025-02-14T00:50:52.172195650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:50:52.172242 containerd[1509]: time="2025-02-14T00:50:52.172212944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.173478 containerd[1509]: time="2025-02-14T00:50:52.172340419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.176215 containerd[1509]: time="2025-02-14T00:50:52.175602654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:50:52.176215 containerd[1509]: time="2025-02-14T00:50:52.175837607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:50:52.176215 containerd[1509]: time="2025-02-14T00:50:52.175866816Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 00:50:52.176215 containerd[1509]: time="2025-02-14T00:50:52.176012070Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 00:50:52.176215 containerd[1509]: time="2025-02-14T00:50:52.176137096Z" level=info msg="metadata content store policy set" policy=shared Feb 14 00:50:52.184087 containerd[1509]: time="2025-02-14T00:50:52.183790688Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 00:50:52.184087 containerd[1509]: time="2025-02-14T00:50:52.183922738Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 00:50:52.184087 containerd[1509]: time="2025-02-14T00:50:52.183955655Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 00:50:52.184087 containerd[1509]: time="2025-02-14T00:50:52.183986996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 00:50:52.184087 containerd[1509]: time="2025-02-14T00:50:52.184043664Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 00:50:52.185652 containerd[1509]: time="2025-02-14T00:50:52.185613782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 00:50:52.186136 containerd[1509]: time="2025-02-14T00:50:52.186064248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 14 00:50:52.186304 containerd[1509]: time="2025-02-14T00:50:52.186274506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 00:50:52.186359 containerd[1509]: time="2025-02-14T00:50:52.186310039Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 00:50:52.186359 containerd[1509]: time="2025-02-14T00:50:52.186334011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 00:50:52.186475 containerd[1509]: time="2025-02-14T00:50:52.186357179Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 14 00:50:52.186475 containerd[1509]: time="2025-02-14T00:50:52.186409861Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186475 containerd[1509]: time="2025-02-14T00:50:52.186441135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186475 containerd[1509]: time="2025-02-14T00:50:52.186467915Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186617 containerd[1509]: time="2025-02-14T00:50:52.186517006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186617 containerd[1509]: time="2025-02-14T00:50:52.186546359Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186617 containerd[1509]: time="2025-02-14T00:50:52.186571915Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.186617 containerd[1509]: time="2025-02-14T00:50:52.186596289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186639049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186667710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186689511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186712543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186733353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186755868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186776639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186813776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186839668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186864836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186884649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186912944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186937632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.186963345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 00:50:52.187684 containerd[1509]: time="2025-02-14T00:50:52.187017606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187043543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187062986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187145425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187188470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187210275Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187231658Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187249332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187270188Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187297418Z" level=info msg="NRI interface is disabled by configuration." Feb 14 00:50:52.188129 containerd[1509]: time="2025-02-14T00:50:52.187317929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 14 00:50:52.192558 containerd[1509]: time="2025-02-14T00:50:52.191893649Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 00:50:52.192558 containerd[1509]: time="2025-02-14T00:50:52.192031910Z" level=info msg="Connect containerd service" Feb 14 00:50:52.192558 containerd[1509]: time="2025-02-14T00:50:52.192131070Z" level=info msg="using legacy CRI server" Feb 14 00:50:52.192558 containerd[1509]: time="2025-02-14T00:50:52.192151378Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 00:50:52.192558 containerd[1509]: time="2025-02-14T00:50:52.192365750Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.195650759Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 00:50:52.197719 
containerd[1509]: time="2025-02-14T00:50:52.196532878Z" level=info msg="Start subscribing containerd event" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.196628543Z" level=info msg="Start recovering state" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.196778371Z" level=info msg="Start event monitor" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.196810739Z" level=info msg="Start snapshots syncer" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.196829581Z" level=info msg="Start cni network conf syncer for default" Feb 14 00:50:52.197719 containerd[1509]: time="2025-02-14T00:50:52.196843596Z" level=info msg="Start streaming server" Feb 14 00:50:52.210587 containerd[1509]: time="2025-02-14T00:50:52.199286224Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 00:50:52.210587 containerd[1509]: time="2025-02-14T00:50:52.199421425Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 00:50:52.210587 containerd[1509]: time="2025-02-14T00:50:52.199543266Z" level=info msg="containerd successfully booted in 0.167513s" Feb 14 00:50:52.199722 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 00:50:52.420103 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 00:50:52.488310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 00:50:52.501844 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 00:50:52.521946 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 00:50:52.522466 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 14 00:50:52.534235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 00:50:52.562841 systemd-networkd[1424]: eth0: Ignoring DHCPv6 address 2a02:1348:179:845b:24:19ff:fee6:116e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:845b:24:19ff:fee6:116e/64 assigned by NDisc. Feb 14 00:50:52.562859 systemd-networkd[1424]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 14 00:50:52.570628 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 00:50:52.584970 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 00:50:52.595265 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 14 00:50:52.596728 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 00:50:52.646293 tar[1499]: linux-amd64/LICENSE Feb 14 00:50:52.646293 tar[1499]: linux-amd64/README.md Feb 14 00:50:52.667974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 14 00:50:53.040281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
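[Note] The level=error about CNI above is expected at this stage: /etc/cni/net.d is empty until a network plugin installs a config (typically done later by the Kubernetes networking add-on). A minimal sketch of a bridge conflist that would satisfy the loader; every name and subnet here is a placeholder, not what this node will actually use:

    mkdir -p /etc/cni/net.d
    cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF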
Feb 14 00:50:53.050924 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:50:53.663970 kubelet[1599]: E0214 00:50:53.663866 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:50:53.665838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:50:53.666095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:50:53.666700 systemd[1]: kubelet.service: Consumed 1.031s CPU time. Feb 14 00:50:56.572823 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 14 00:50:56.590000 systemd[1]: Started sshd@0-10.230.17.110:22-147.75.109.163:35220.service - OpenSSH per-connection server daemon (147.75.109.163:35220). Feb 14 00:50:57.491770 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 35220 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:50:57.495610 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:50:57.513682 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 00:50:57.520872 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 00:50:57.525140 systemd-logind[1486]: New session 1 of user core. Feb 14 00:50:57.548629 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 14 00:50:57.563023 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 00:50:57.568305 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 00:50:57.634464 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 00:50:57.643077 systemd-logind[1486]: New session 2 of user core. Feb 14 00:50:57.671287 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 00:50:57.679128 systemd-logind[1486]: New session 3 of user core. Feb 14 00:50:57.759446 systemd[1613]: Queued start job for default target default.target. Feb 14 00:50:57.768332 systemd[1613]: Created slice app.slice - User Application Slice. Feb 14 00:50:57.768379 systemd[1613]: Reached target paths.target - Paths. Feb 14 00:50:57.768443 systemd[1613]: Reached target timers.target - Timers. Feb 14 00:50:57.770667 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 00:50:57.787062 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 00:50:57.787268 systemd[1613]: Reached target sockets.target - Sockets. Feb 14 00:50:57.787295 systemd[1613]: Reached target basic.target - Basic System. Feb 14 00:50:57.787367 systemd[1613]: Reached target default.target - Main User Target. Feb 14 00:50:57.787466 systemd[1613]: Startup finished in 208ms. Feb 14 00:50:57.787712 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 00:50:57.799790 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 00:50:57.801240 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 00:50:57.802583 systemd[1]: Started session-3.scope - Session 3 of User core. 
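[Note] The kubelet crash above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm init/join writes it. A minimal hand-written sketch of the file's shape (cgroupDriver matches the SystemdCgroup:true visible in the containerd CRI config earlier; kubeadm would generate a much fuller version):

    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF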
Feb 14 00:50:58.201124 coreos-metadata[1474]: Feb 14 00:50:58.200 WARN failed to locate config-drive, using the metadata service API instead Feb 14 00:50:58.227624 coreos-metadata[1474]: Feb 14 00:50:58.227 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 14 00:50:58.234261 coreos-metadata[1474]: Feb 14 00:50:58.234 INFO Fetch failed with 404: resource not found Feb 14 00:50:58.234261 coreos-metadata[1474]: Feb 14 00:50:58.234 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 00:50:58.234970 coreos-metadata[1474]: Feb 14 00:50:58.234 INFO Fetch successful Feb 14 00:50:58.235073 coreos-metadata[1474]: Feb 14 00:50:58.234 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 14 00:50:58.247746 coreos-metadata[1474]: Feb 14 00:50:58.247 INFO Fetch successful Feb 14 00:50:58.247923 coreos-metadata[1474]: Feb 14 00:50:58.247 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 14 00:50:58.262774 coreos-metadata[1474]: Feb 14 00:50:58.262 INFO Fetch successful Feb 14 00:50:58.262871 coreos-metadata[1474]: Feb 14 00:50:58.262 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 14 00:50:58.320979 coreos-metadata[1474]: Feb 14 00:50:58.320 INFO Fetch successful Feb 14 00:50:58.321149 coreos-metadata[1474]: Feb 14 00:50:58.321 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 14 00:50:58.336565 coreos-metadata[1474]: Feb 14 00:50:58.336 INFO Fetch successful Feb 14 00:50:58.369541 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 14 00:50:58.371660 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 14 00:50:58.445944 systemd[1]: Started sshd@1-10.230.17.110:22-147.75.109.163:35226.service - OpenSSH per-connection server daemon (147.75.109.163:35226). Feb 14 00:50:58.870777 coreos-metadata[1540]: Feb 14 00:50:58.870 WARN failed to locate config-drive, using the metadata service API instead Feb 14 00:50:58.894490 coreos-metadata[1540]: Feb 14 00:50:58.894 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 14 00:50:58.919133 coreos-metadata[1540]: Feb 14 00:50:58.918 INFO Fetch successful Feb 14 00:50:58.919133 coreos-metadata[1540]: Feb 14 00:50:58.919 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 14 00:50:58.947642 coreos-metadata[1540]: Feb 14 00:50:58.947 INFO Fetch successful Feb 14 00:50:58.949450 unknown[1540]: wrote ssh authorized keys file for user: core Feb 14 00:50:58.975967 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:50:58.976919 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 14 00:50:58.980183 systemd[1]: Finished sshkeys.service. Feb 14 00:50:58.983294 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 00:50:58.983831 systemd[1]: Startup finished in 1.542s (kernel) + 15.806s (initrd) + 11.951s (userspace) = 29.299s. Feb 14 00:50:59.329570 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 35226 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:50:59.331744 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:50:59.339678 systemd-logind[1486]: New session 4 of user core. 
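[Note] coreos-metadata falls back from the missing OpenStack config-drive to the EC2-compatible metadata API, fetching hostname, instance-id, instance-type and addresses one request at a time. The same endpoints can be queried by hand (paths taken from the log; 169.254.169.254 is the link-local metadata service):

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4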
Feb 14 00:50:59.357810 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 00:50:59.949796 sshd[1657]: pam_unix(sshd:session): session closed for user core Feb 14 00:50:59.954740 systemd[1]: sshd@1-10.230.17.110:22-147.75.109.163:35226.service: Deactivated successfully. Feb 14 00:50:59.956751 systemd[1]: session-4.scope: Deactivated successfully. Feb 14 00:50:59.957687 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Feb 14 00:50:59.959284 systemd-logind[1486]: Removed session 4. Feb 14 00:51:00.113855 systemd[1]: Started sshd@2-10.230.17.110:22-147.75.109.163:47558.service - OpenSSH per-connection server daemon (147.75.109.163:47558). Feb 14 00:51:00.988987 sshd[1670]: Accepted publickey for core from 147.75.109.163 port 47558 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:51:00.991063 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:51:00.997876 systemd-logind[1486]: New session 5 of user core. Feb 14 00:51:01.008752 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 14 00:51:01.601617 sshd[1670]: pam_unix(sshd:session): session closed for user core Feb 14 00:51:01.606013 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Feb 14 00:51:01.608049 systemd[1]: sshd@2-10.230.17.110:22-147.75.109.163:47558.service: Deactivated successfully. Feb 14 00:51:01.610512 systemd[1]: session-5.scope: Deactivated successfully. Feb 14 00:51:01.611879 systemd-logind[1486]: Removed session 5. Feb 14 00:51:01.753293 systemd[1]: Started sshd@3-10.230.17.110:22-147.75.109.163:47572.service - OpenSSH per-connection server daemon (147.75.109.163:47572). Feb 14 00:51:02.649494 sshd[1677]: Accepted publickey for core from 147.75.109.163 port 47572 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:51:02.651610 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:51:02.658020 systemd-logind[1486]: New session 6 of user core. Feb 14 00:51:02.670693 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 14 00:51:03.268240 sshd[1677]: pam_unix(sshd:session): session closed for user core Feb 14 00:51:03.272749 systemd[1]: sshd@3-10.230.17.110:22-147.75.109.163:47572.service: Deactivated successfully. Feb 14 00:51:03.274760 systemd[1]: session-6.scope: Deactivated successfully. Feb 14 00:51:03.276766 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Feb 14 00:51:03.278483 systemd-logind[1486]: Removed session 6. Feb 14 00:51:03.429801 systemd[1]: Started sshd@4-10.230.17.110:22-147.75.109.163:47586.service - OpenSSH per-connection server daemon (147.75.109.163:47586). Feb 14 00:51:03.801893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 14 00:51:03.815746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:03.975864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
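[Note] "Scheduled restart job, restart counter is at 1" is systemd re-running the failed kubelet; the roughly ten-second gap between the 00:50:53 failure and this 00:51:03 restart is consistent with a unit along these lines (a sketch; the shipped unit's exact values are an assumption):

    cat <<'EOF'
    [Service]
    Restart=always
    RestartSec=10
    EOF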
Feb 14 00:51:03.986943 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:51:04.056315 kubelet[1694]: E0214 00:51:04.056036 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:51:04.060066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:51:04.060329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:51:04.324513 sshd[1684]: Accepted publickey for core from 147.75.109.163 port 47586 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:51:04.326537 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:51:04.334416 systemd-logind[1486]: New session 7 of user core. Feb 14 00:51:04.348729 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 14 00:51:04.812815 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 14 00:51:04.813368 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:51:04.830348 sudo[1702]: pam_unix(sudo:session): session closed for user root Feb 14 00:51:04.983433 sshd[1684]: pam_unix(sshd:session): session closed for user core Feb 14 00:51:04.989804 systemd[1]: sshd@4-10.230.17.110:22-147.75.109.163:47586.service: Deactivated successfully. Feb 14 00:51:04.992333 systemd[1]: session-7.scope: Deactivated successfully. Feb 14 00:51:04.993776 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Feb 14 00:51:04.995614 systemd-logind[1486]: Removed session 7. Feb 14 00:51:05.150411 systemd[1]: Started sshd@5-10.230.17.110:22-147.75.109.163:47594.service - OpenSSH per-connection server daemon (147.75.109.163:47594). Feb 14 00:51:06.030967 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 47594 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:51:06.033278 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:51:06.040280 systemd-logind[1486]: New session 8 of user core. Feb 14 00:51:06.050619 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 14 00:51:06.506864 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 14 00:51:06.507356 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:51:06.512853 sudo[1711]: pam_unix(sudo:session): session closed for user root Feb 14 00:51:06.520987 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 14 00:51:06.522034 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:51:06.541285 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 14 00:51:06.545478 auditctl[1714]: No rules Feb 14 00:51:06.545937 systemd[1]: audit-rules.service: Deactivated successfully. Feb 14 00:51:06.546215 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 14 00:51:06.564269 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
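[Note] The sudo commands above delete the packaged audit rule files and restart audit-rules.service, which rebuilds the kernel rule set from /etc/audit/rules.d via augenrules; "No rules" from auditctl simply reflects the now-empty directory. Sketch of the equivalent manual cycle:

    augenrules --load   # merge /etc/audit/rules.d/*.rules and load them
    auditctl -l         # list what the kernel currently enforces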
Feb 14 00:51:06.598059 augenrules[1732]: No rules Feb 14 00:51:06.599015 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 00:51:06.600991 sudo[1710]: pam_unix(sudo:session): session closed for user root Feb 14 00:51:06.744887 sshd[1707]: pam_unix(sshd:session): session closed for user core Feb 14 00:51:06.750200 systemd[1]: sshd@5-10.230.17.110:22-147.75.109.163:47594.service: Deactivated successfully. Feb 14 00:51:06.752943 systemd[1]: session-8.scope: Deactivated successfully. Feb 14 00:51:06.755363 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Feb 14 00:51:06.757112 systemd-logind[1486]: Removed session 8. Feb 14 00:51:06.908823 systemd[1]: Started sshd@6-10.230.17.110:22-147.75.109.163:47610.service - OpenSSH per-connection server daemon (147.75.109.163:47610). Feb 14 00:51:07.791502 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 47610 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:51:07.793713 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:51:07.800670 systemd-logind[1486]: New session 9 of user core. Feb 14 00:51:07.811662 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 14 00:51:08.269748 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 14 00:51:08.270290 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:51:08.751738 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 14 00:51:08.753525 (dockerd)[1759]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 14 00:51:09.218456 dockerd[1759]: time="2025-02-14T00:51:09.217848673Z" level=info msg="Starting up" Feb 14 00:51:09.353763 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1489415164-merged.mount: Deactivated successfully. Feb 14 00:51:09.366219 systemd[1]: var-lib-docker-metacopy\x2dcheck1939157827-merged.mount: Deactivated successfully. Feb 14 00:51:09.398438 dockerd[1759]: time="2025-02-14T00:51:09.398317850Z" level=info msg="Loading containers: start." Feb 14 00:51:09.543422 kernel: Initializing XFRM netlink socket Feb 14 00:51:09.662636 systemd-networkd[1424]: docker0: Link UP Feb 14 00:51:09.691755 dockerd[1759]: time="2025-02-14T00:51:09.691667549Z" level=info msg="Loading containers: done." Feb 14 00:51:09.716161 dockerd[1759]: time="2025-02-14T00:51:09.715142800Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 14 00:51:09.716161 dockerd[1759]: time="2025-02-14T00:51:09.715426808Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 14 00:51:09.716161 dockerd[1759]: time="2025-02-14T00:51:09.715654290Z" level=info msg="Daemon has completed initialization" Feb 14 00:51:09.758599 dockerd[1759]: time="2025-02-14T00:51:09.758368507Z" level=info msg="API listen on /run/docker.sock" Feb 14 00:51:09.759012 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 14 00:51:10.350349 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2250471410-merged.mount: Deactivated successfully. 
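[Note] dockerd completes initialization and serves its API on the socket-activated /run/docker.sock; the XFRM netlink and docker0 messages are the default bridge network being set up. A quick sketch to confirm the endpoint is answering:

    docker -H unix:///run/docker.sock version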
Feb 14 00:51:10.960145 containerd[1509]: time="2025-02-14T00:51:10.959531422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 14 00:51:11.774695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156863664.mount: Deactivated successfully. Feb 14 00:51:13.468555 containerd[1509]: time="2025-02-14T00:51:13.468486717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:13.469789 containerd[1509]: time="2025-02-14T00:51:13.469705909Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976596" Feb 14 00:51:13.473750 containerd[1509]: time="2025-02-14T00:51:13.473697351Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:13.478485 containerd[1509]: time="2025-02-14T00:51:13.478441309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:13.480452 containerd[1509]: time="2025-02-14T00:51:13.480407884Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.520755445s" Feb 14 00:51:13.480539 containerd[1509]: time="2025-02-14T00:51:13.480476315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 14 00:51:13.482772 containerd[1509]: time="2025-02-14T00:51:13.482738981Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 14 00:51:14.301847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 14 00:51:14.309815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:14.486606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:14.499340 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:51:14.583659 kubelet[1965]: E0214 00:51:14.580710 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:51:14.584979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:51:14.585269 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
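[Note] The PullImage/ImageCreate sequence above is containerd fetching kube-apiserver v1.31.6 into its Kubernetes namespace (about 28 MB in 2.52s per the log). The pull can be reproduced from the CLI (image reference from the log; ctr ships with containerd):

    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.6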
Feb 14 00:51:15.803460 containerd[1509]: time="2025-02-14T00:51:15.802921288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:15.804565 containerd[1509]: time="2025-02-14T00:51:15.804505006Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708201" Feb 14 00:51:15.805521 containerd[1509]: time="2025-02-14T00:51:15.805442997Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:15.809410 containerd[1509]: time="2025-02-14T00:51:15.809347528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:15.811225 containerd[1509]: time="2025-02-14T00:51:15.811021021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 2.328144045s" Feb 14 00:51:15.811225 containerd[1509]: time="2025-02-14T00:51:15.811083868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 14 00:51:15.812040 containerd[1509]: time="2025-02-14T00:51:15.811989455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 14 00:51:17.960457 containerd[1509]: time="2025-02-14T00:51:17.960361535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:17.961986 containerd[1509]: time="2025-02-14T00:51:17.961919343Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652433" Feb 14 00:51:17.962638 containerd[1509]: time="2025-02-14T00:51:17.962577505Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:17.966690 containerd[1509]: time="2025-02-14T00:51:17.966594825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:17.969261 containerd[1509]: time="2025-02-14T00:51:17.968896291Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 2.156694043s" Feb 14 00:51:17.969261 containerd[1509]: time="2025-02-14T00:51:17.968952577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 14 00:51:17.972946 
containerd[1509]: time="2025-02-14T00:51:17.972693223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 14 00:51:19.593606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529492024.mount: Deactivated successfully. Feb 14 00:51:20.481714 containerd[1509]: time="2025-02-14T00:51:20.481620801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:20.483005 containerd[1509]: time="2025-02-14T00:51:20.482754052Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116" Feb 14 00:51:20.483760 containerd[1509]: time="2025-02-14T00:51:20.483716595Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:20.486593 containerd[1509]: time="2025-02-14T00:51:20.486552997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:20.487743 containerd[1509]: time="2025-02-14T00:51:20.487702689Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.514960602s" Feb 14 00:51:20.487893 containerd[1509]: time="2025-02-14T00:51:20.487862079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 14 00:51:20.488820 containerd[1509]: time="2025-02-14T00:51:20.488737025Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 14 00:51:21.111563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626175366.mount: Deactivated successfully. 
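With kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy now cached, the pulls can be verified out-of-band even while the kubelet is down, since containerd holds the images independently. Hypothetical commands, assuming crictl is installed and the default containerd socket path:

    # List CRI-visible images (socket path is the containerd default, assumed here):
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # Same view through containerd's own CLI, in the CRI namespace:
    ctr --namespace k8s.io images ls | grep registry.k8s.io/kube-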
Feb 14 00:51:22.221597 containerd[1509]: time="2025-02-14T00:51:22.221498885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.223246 containerd[1509]: time="2025-02-14T00:51:22.223179959Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 14 00:51:22.227407 containerd[1509]: time="2025-02-14T00:51:22.227145386Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.231670 containerd[1509]: time="2025-02-14T00:51:22.231626563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.233506 containerd[1509]: time="2025-02-14T00:51:22.233264680Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.744472771s" Feb 14 00:51:22.233506 containerd[1509]: time="2025-02-14T00:51:22.233319779Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 14 00:51:22.235133 containerd[1509]: time="2025-02-14T00:51:22.234865495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 14 00:51:22.615827 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 14 00:51:22.885524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711076840.mount: Deactivated successfully. 
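The coredns pull finishes and the sandbox ("pause") image is fetched next. pause:3.10 is the image every pod sandbox on this node will be created from; in containerd it is selected by the CRI plugin's sandbox_image setting. A sketch of the relevant stanza (assumed defaults for containerd 1.7, not dumped from this host):

    # /etc/containerd/config.toml (excerpt, assumed):
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"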
Feb 14 00:51:22.903414 containerd[1509]: time="2025-02-14T00:51:22.903329736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.905113 containerd[1509]: time="2025-02-14T00:51:22.905047548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 14 00:51:22.906463 containerd[1509]: time="2025-02-14T00:51:22.906194853Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.909003 containerd[1509]: time="2025-02-14T00:51:22.908917402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:22.911117 containerd[1509]: time="2025-02-14T00:51:22.910194806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 675.284478ms" Feb 14 00:51:22.911117 containerd[1509]: time="2025-02-14T00:51:22.910245453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 14 00:51:22.911603 containerd[1509]: time="2025-02-14T00:51:22.911572833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 14 00:51:23.559285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103819569.mount: Deactivated successfully. Feb 14 00:51:24.802051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 14 00:51:24.810747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:24.968615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:24.983261 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:51:25.059932 kubelet[2097]: E0214 00:51:25.059727 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:51:25.062336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:51:25.062750 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
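Restart number 3 fails for the same reason as the earlier attempts: the config file still is not there. The roughly ten-second cadence between "Scheduled restart job" entries matches common kubelet unit packaging; a sketch of the directives that would produce this behavior (assumed, not read from this host's unit):

    # /usr/lib/systemd/system/kubelet.service (excerpt, assumed packaging defaults):
    [Service]
    Restart=always
    RestartSec=10

    # Live values could be confirmed with:
    #   systemctl show kubelet -p Restart -p RestartUSec -p NRestarts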
Feb 14 00:51:26.981096 containerd[1509]: time="2025-02-14T00:51:26.980833386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:26.982670 containerd[1509]: time="2025-02-14T00:51:26.982612302Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Feb 14 00:51:26.983414 containerd[1509]: time="2025-02-14T00:51:26.983322838Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:26.987685 containerd[1509]: time="2025-02-14T00:51:26.987644367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:26.989809 containerd[1509]: time="2025-02-14T00:51:26.989581078Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.077866722s" Feb 14 00:51:26.989809 containerd[1509]: time="2025-02-14T00:51:26.989630523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 14 00:51:31.317768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:31.332142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:31.383285 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit session-9.scope)... Feb 14 00:51:31.383314 systemd[1]: Reloading... Feb 14 00:51:31.569444 zram_generator::config[2171]: No configuration found. Feb 14 00:51:31.753759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:51:31.866409 systemd[1]: Reloading finished in 482 ms. Feb 14 00:51:31.934855 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 14 00:51:31.935003 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 14 00:51:31.935510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:31.940924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:32.099036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:32.114934 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 00:51:32.180682 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:51:32.180682 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
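Two things are worth noting in this reload: systemd rewrites docker.socket's legacy /var/run listener on the fly and asks for the unit to be fixed, and the freshly started kubelet (PID 2243) now finds its config file but warns that several command-line flags are deprecated in favor of that file. The socket warning would be silenced by a drop-in along these lines (paths taken from the log message itself):

    # systemctl edit docker.socket   -> /etc/systemd/system/docker.socket.d/override.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    # The empty assignment clears the inherited value before setting the new path.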
Feb 14 00:51:32.180682 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:51:32.181326 kubelet[2243]: I0214 00:51:32.180842 2243 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 00:51:32.975317 kubelet[2243]: I0214 00:51:32.975234 2243 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 14 00:51:32.975317 kubelet[2243]: I0214 00:51:32.975296 2243 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 00:51:32.975736 kubelet[2243]: I0214 00:51:32.975701 2243 server.go:929] "Client rotation is on, will bootstrap in background" Feb 14 00:51:33.031288 kubelet[2243]: E0214 00:51:33.031189 2243 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.17.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.034911 kubelet[2243]: I0214 00:51:33.034855 2243 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 00:51:33.046938 kubelet[2243]: E0214 00:51:33.046863 2243 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 00:51:33.046938 kubelet[2243]: I0214 00:51:33.046927 2243 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 00:51:33.054940 kubelet[2243]: I0214 00:51:33.054889 2243 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 00:51:33.056577 kubelet[2243]: I0214 00:51:33.056515 2243 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 00:51:33.057008 kubelet[2243]: I0214 00:51:33.056937 2243 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 00:51:33.057278 kubelet[2243]: I0214 00:51:33.056997 2243 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-2zttm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 00:51:33.057562 kubelet[2243]: I0214 00:51:33.057312 2243 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 00:51:33.057562 kubelet[2243]: I0214 00:51:33.057331 2243 container_manager_linux.go:300] "Creating device plugin manager" Feb 14 00:51:33.057562 kubelet[2243]: I0214 00:51:33.057546 2243 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:51:33.060838 kubelet[2243]: I0214 00:51:33.060788 2243 kubelet.go:408] "Attempting to sync node with API server" Feb 14 00:51:33.060838 kubelet[2243]: I0214 00:51:33.060828 2243 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 00:51:33.060972 kubelet[2243]: I0214 00:51:33.060899 2243 kubelet.go:314] "Adding apiserver pod source" Feb 14 00:51:33.060972 kubelet[2243]: I0214 00:51:33.060943 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 00:51:33.071022 kubelet[2243]: W0214 00:51:33.070872 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2zttm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:33.071022 kubelet[2243]: E0214 00:51:33.070968 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.17.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2zttm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.073653 kubelet[2243]: I0214 00:51:33.073619 2243 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 00:51:33.075969 kubelet[2243]: I0214 00:51:33.075710 2243 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 00:51:33.077485 kubelet[2243]: W0214 00:51:33.076534 2243 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 14 00:51:33.077556 kubelet[2243]: I0214 00:51:33.077509 2243 server.go:1269] "Started kubelet" Feb 14 00:51:33.085567 kubelet[2243]: W0214 00:51:33.085252 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:33.085567 kubelet[2243]: E0214 00:51:33.085356 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.17.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.093415 kubelet[2243]: I0214 00:51:33.093188 2243 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 00:51:33.095608 kubelet[2243]: I0214 00:51:33.095572 2243 server.go:460] "Adding debug handlers to kubelet server" Feb 14 00:51:33.096906 kubelet[2243]: I0214 00:51:33.096667 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 00:51:33.097021 kubelet[2243]: I0214 00:51:33.096961 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 00:51:33.097371 kubelet[2243]: I0214 00:51:33.097332 2243 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 00:51:33.104493 kubelet[2243]: E0214 00:51:33.099986 2243 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.17.110:6443/api/v1/namespaces/default/events\": dial tcp 10.230.17.110:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-2zttm.gb1.brightbox.com.1823ecd7e3b3c211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-2zttm.gb1.brightbox.com,UID:srv-2zttm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-2zttm.gb1.brightbox.com,},FirstTimestamp:2025-02-14 00:51:33.077475857 +0000 UTC m=+0.957107710,LastTimestamp:2025-02-14 00:51:33.077475857 +0000 UTC m=+0.957107710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-2zttm.gb1.brightbox.com,}" Feb 14 00:51:33.106437 kubelet[2243]: I0214 00:51:33.106199 2243 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 00:51:33.106776 kubelet[2243]: I0214 
00:51:33.106750 2243 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 14 00:51:33.107414 kubelet[2243]: E0214 00:51:33.107213 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-2zttm.gb1.brightbox.com\" not found" Feb 14 00:51:33.110572 kubelet[2243]: E0214 00:51:33.110109 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2zttm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.110:6443: connect: connection refused" interval="200ms" Feb 14 00:51:33.111946 kubelet[2243]: I0214 00:51:33.111617 2243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 00:51:33.113226 kubelet[2243]: I0214 00:51:33.112871 2243 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 14 00:51:33.113226 kubelet[2243]: I0214 00:51:33.112994 2243 reconciler.go:26] "Reconciler: start to sync state" Feb 14 00:51:33.113996 kubelet[2243]: E0214 00:51:33.113963 2243 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 00:51:33.114356 kubelet[2243]: I0214 00:51:33.114325 2243 factory.go:221] Registration of the containerd container factory successfully Feb 14 00:51:33.114356 kubelet[2243]: I0214 00:51:33.114352 2243 factory.go:221] Registration of the systemd container factory successfully Feb 14 00:51:33.127417 kubelet[2243]: W0214 00:51:33.126172 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:33.127417 kubelet[2243]: E0214 00:51:33.126249 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.17.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.132896 kubelet[2243]: I0214 00:51:33.132813 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 00:51:33.137136 kubelet[2243]: I0214 00:51:33.137101 2243 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 14 00:51:33.137243 kubelet[2243]: I0214 00:51:33.137150 2243 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 00:51:33.137243 kubelet[2243]: I0214 00:51:33.137203 2243 kubelet.go:2321] "Starting kubelet main sync loop" Feb 14 00:51:33.137337 kubelet[2243]: E0214 00:51:33.137270 2243 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 00:51:33.139226 kubelet[2243]: W0214 00:51:33.139188 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:33.139890 kubelet[2243]: E0214 00:51:33.139847 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.17.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.151713 kubelet[2243]: I0214 00:51:33.151658 2243 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 00:51:33.151713 kubelet[2243]: I0214 00:51:33.151694 2243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 00:51:33.151933 kubelet[2243]: I0214 00:51:33.151740 2243 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:51:33.153837 kubelet[2243]: I0214 00:51:33.153807 2243 policy_none.go:49] "None policy: Start" Feb 14 00:51:33.154869 kubelet[2243]: I0214 00:51:33.154843 2243 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 00:51:33.154974 kubelet[2243]: I0214 00:51:33.154940 2243 state_mem.go:35] "Initializing new in-memory state store" Feb 14 00:51:33.165553 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 14 00:51:33.189023 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 14 00:51:33.195114 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 14 00:51:33.206425 kubelet[2243]: I0214 00:51:33.206192 2243 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 00:51:33.207815 kubelet[2243]: I0214 00:51:33.206613 2243 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 00:51:33.207815 kubelet[2243]: I0214 00:51:33.206641 2243 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 00:51:33.208110 kubelet[2243]: I0214 00:51:33.208038 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 00:51:33.211958 kubelet[2243]: E0214 00:51:33.211883 2243 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-2zttm.gb1.brightbox.com\" not found" Feb 14 00:51:33.258436 systemd[1]: Created slice kubepods-burstable-podd47e681e204ae903a6b4ba87c9f29044.slice - libcontainer container kubepods-burstable-podd47e681e204ae903a6b4ba87c9f29044.slice. Feb 14 00:51:33.282852 systemd[1]: Created slice kubepods-burstable-pod437d5f12108ecd63012bf56e9f758f2a.slice - libcontainer container kubepods-burstable-pod437d5f12108ecd63012bf56e9f758f2a.slice. 
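With the "none" CPU and memory policies selected, the kubelet delegates its cgroup tree to systemd (the configured cgroup driver): kubepods.slice for all pods, kubepods-burstable.slice and kubepods-besteffort.slice for the QoS tiers, and one pod-scoped slice per static pod UID (two created above, a third just below). The resulting hierarchy could be inspected with, for example:

    # Hypothetical inspection of the kubelet-created cgroup tree via systemd:
    systemd-cgls -u kubepods.slice
    systemctl status kubepods-burstable.slice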
Feb 14 00:51:33.290143 systemd[1]: Created slice kubepods-burstable-pode645571fbfd80d5392c3ed1e38f1c69b.slice - libcontainer container kubepods-burstable-pode645571fbfd80d5392c3ed1e38f1c69b.slice. Feb 14 00:51:33.310768 kubelet[2243]: E0214 00:51:33.310708 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2zttm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.110:6443: connect: connection refused" interval="400ms" Feb 14 00:51:33.311429 kubelet[2243]: I0214 00:51:33.311315 2243 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.312047 kubelet[2243]: E0214 00:51:33.311994 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.17.110:6443/api/v1/nodes\": dial tcp 10.230.17.110:6443: connect: connection refused" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314506 kubelet[2243]: I0214 00:51:33.314392 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-k8s-certs\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314506 kubelet[2243]: I0214 00:51:33.314460 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314645 kubelet[2243]: I0214 00:51:33.314526 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-ca-certs\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314645 kubelet[2243]: I0214 00:51:33.314558 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-flexvolume-dir\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314645 kubelet[2243]: I0214 00:51:33.314608 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-kubeconfig\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314816 kubelet[2243]: I0214 00:51:33.314642 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: 
\"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314816 kubelet[2243]: I0214 00:51:33.314689 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-ca-certs\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314816 kubelet[2243]: I0214 00:51:33.314720 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-k8s-certs\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.314816 kubelet[2243]: I0214 00:51:33.314766 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e645571fbfd80d5392c3ed1e38f1c69b-kubeconfig\") pod \"kube-scheduler-srv-2zttm.gb1.brightbox.com\" (UID: \"e645571fbfd80d5392c3ed1e38f1c69b\") " pod="kube-system/kube-scheduler-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.516813 kubelet[2243]: I0214 00:51:33.516583 2243 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.517350 kubelet[2243]: E0214 00:51:33.517310 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.17.110:6443/api/v1/nodes\": dial tcp 10.230.17.110:6443: connect: connection refused" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.582314 containerd[1509]: time="2025-02-14T00:51:33.581828977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2zttm.gb1.brightbox.com,Uid:d47e681e204ae903a6b4ba87c9f29044,Namespace:kube-system,Attempt:0,}" Feb 14 00:51:33.595148 containerd[1509]: time="2025-02-14T00:51:33.594865949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2zttm.gb1.brightbox.com,Uid:437d5f12108ecd63012bf56e9f758f2a,Namespace:kube-system,Attempt:0,}" Feb 14 00:51:33.596186 containerd[1509]: time="2025-02-14T00:51:33.596128980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2zttm.gb1.brightbox.com,Uid:e645571fbfd80d5392c3ed1e38f1c69b,Namespace:kube-system,Attempt:0,}" Feb 14 00:51:33.712209 kubelet[2243]: E0214 00:51:33.712121 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2zttm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.110:6443: connect: connection refused" interval="800ms" Feb 14 00:51:33.908353 kubelet[2243]: W0214 00:51:33.908197 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:33.908353 kubelet[2243]: E0214 00:51:33.908293 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.230.17.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:33.922002 kubelet[2243]: I0214 00:51:33.921924 2243 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:33.923654 kubelet[2243]: E0214 00:51:33.923191 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.17.110:6443/api/v1/nodes\": dial tcp 10.230.17.110:6443: connect: connection refused" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:34.203932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926922963.mount: Deactivated successfully. Feb 14 00:51:34.212021 containerd[1509]: time="2025-02-14T00:51:34.211856041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:51:34.215058 containerd[1509]: time="2025-02-14T00:51:34.214958260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:51:34.215968 containerd[1509]: time="2025-02-14T00:51:34.215890973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:51:34.217243 containerd[1509]: time="2025-02-14T00:51:34.217182740Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:51:34.218668 containerd[1509]: time="2025-02-14T00:51:34.218520563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 14 00:51:34.219695 containerd[1509]: time="2025-02-14T00:51:34.219612660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:51:34.219998 containerd[1509]: time="2025-02-14T00:51:34.219930140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:51:34.224487 containerd[1509]: time="2025-02-14T00:51:34.224416662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:51:34.239401 containerd[1509]: time="2025-02-14T00:51:34.234091668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.107361ms" Feb 14 00:51:34.239401 containerd[1509]: time="2025-02-14T00:51:34.237906764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
641.68924ms" Feb 14 00:51:34.239401 containerd[1509]: time="2025-02-14T00:51:34.239908010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.955867ms" Feb 14 00:51:34.313079 kubelet[2243]: W0214 00:51:34.312968 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:34.314521 kubelet[2243]: E0214 00:51:34.314119 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.17.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:34.469684 containerd[1509]: time="2025-02-14T00:51:34.468663655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:34.471470 containerd[1509]: time="2025-02-14T00:51:34.471115448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:34.471470 containerd[1509]: time="2025-02-14T00:51:34.471147021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.471470 containerd[1509]: time="2025-02-14T00:51:34.471278462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.482505 containerd[1509]: time="2025-02-14T00:51:34.482228999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:34.482505 containerd[1509]: time="2025-02-14T00:51:34.482317259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:34.482853 containerd[1509]: time="2025-02-14T00:51:34.482558107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.483185 containerd[1509]: time="2025-02-14T00:51:34.483126886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.489859 containerd[1509]: time="2025-02-14T00:51:34.488510207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:34.489859 containerd[1509]: time="2025-02-14T00:51:34.488582714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:34.489859 containerd[1509]: time="2025-02-14T00:51:34.488606374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.489859 containerd[1509]: time="2025-02-14T00:51:34.488724279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:34.511800 kubelet[2243]: W0214 00:51:34.511547 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:34.511800 kubelet[2243]: E0214 00:51:34.511655 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.17.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:34.515684 kubelet[2243]: E0214 00:51:34.515592 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2zttm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.110:6443: connect: connection refused" interval="1.6s" Feb 14 00:51:34.526834 systemd[1]: Started cri-containerd-b3ddff298c7973133c1f0b8a36d96bcf6780e6e04144be7193ef9bc85267132b.scope - libcontainer container b3ddff298c7973133c1f0b8a36d96bcf6780e6e04144be7193ef9bc85267132b. Feb 14 00:51:34.538566 systemd[1]: Started cri-containerd-d25132e80c4103c8ed104738c1718e58580a19b7eb1f96cbae686c77b903b3bf.scope - libcontainer container d25132e80c4103c8ed104738c1718e58580a19b7eb1f96cbae686c77b903b3bf. Feb 14 00:51:34.546739 systemd[1]: Started cri-containerd-28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc.scope - libcontainer container 28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc. 
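Each sandbox the CRI starts runs as a transient cri-containerd-&lt;sandbox-id&gt;.scope under systemd, so the hex in the scope names above is exactly the sandbox ID that RunPodSandbox returns below. That makes it straightforward to cross-reference a pod with its cgroup scope, e.g. (hypothetical, once crictl is available):

    # Map a static pod to its sandbox ID and its systemd scope:
    crictl pods --name kube-apiserver-srv-2zttm.gb1.brightbox.com
    systemctl status cri-containerd-28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc.scope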
Feb 14 00:51:34.560824 kubelet[2243]: W0214 00:51:34.560185 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2zttm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.110:6443: connect: connection refused Feb 14 00:51:34.560824 kubelet[2243]: E0214 00:51:34.560769 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.17.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2zttm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:34.636320 containerd[1509]: time="2025-02-14T00:51:34.636249281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2zttm.gb1.brightbox.com,Uid:437d5f12108ecd63012bf56e9f758f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ddff298c7973133c1f0b8a36d96bcf6780e6e04144be7193ef9bc85267132b\"" Feb 14 00:51:34.652537 containerd[1509]: time="2025-02-14T00:51:34.651990887Z" level=info msg="CreateContainer within sandbox \"b3ddff298c7973133c1f0b8a36d96bcf6780e6e04144be7193ef9bc85267132b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 00:51:34.673733 containerd[1509]: time="2025-02-14T00:51:34.673671342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2zttm.gb1.brightbox.com,Uid:e645571fbfd80d5392c3ed1e38f1c69b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d25132e80c4103c8ed104738c1718e58580a19b7eb1f96cbae686c77b903b3bf\"" Feb 14 00:51:34.677491 containerd[1509]: time="2025-02-14T00:51:34.676509456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2zttm.gb1.brightbox.com,Uid:d47e681e204ae903a6b4ba87c9f29044,Namespace:kube-system,Attempt:0,} returns sandbox id \"28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc\"" Feb 14 00:51:34.680518 containerd[1509]: time="2025-02-14T00:51:34.680478818Z" level=info msg="CreateContainer within sandbox \"d25132e80c4103c8ed104738c1718e58580a19b7eb1f96cbae686c77b903b3bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 00:51:34.683803 containerd[1509]: time="2025-02-14T00:51:34.683681023Z" level=info msg="CreateContainer within sandbox \"28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 00:51:34.688651 containerd[1509]: time="2025-02-14T00:51:34.688601181Z" level=info msg="CreateContainer within sandbox \"b3ddff298c7973133c1f0b8a36d96bcf6780e6e04144be7193ef9bc85267132b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64ad9d0cbb520201fbcfe1c743ee35ec05c838da3f9b7168ed865f34fac7b94d\"" Feb 14 00:51:34.690218 containerd[1509]: time="2025-02-14T00:51:34.690133723Z" level=info msg="StartContainer for \"64ad9d0cbb520201fbcfe1c743ee35ec05c838da3f9b7168ed865f34fac7b94d\"" Feb 14 00:51:34.728988 kubelet[2243]: I0214 00:51:34.728273 2243 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:34.728988 kubelet[2243]: E0214 00:51:34.728828 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.17.110:6443/api/v1/nodes\": dial tcp 10.230.17.110:6443: connect: connection refused" node="srv-2zttm.gb1.brightbox.com" 
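All three sandboxes come from the static pod manifests the kubelet watches ("Adding static pod path" path=/etc/kubernetes/manifests above); kubeadm writes one manifest per control-plane component, which is why the node can self-host its own API server before it is able to register. The layout is typically (filenames are kubeadm's usual defaults, not listed in this log):

    # Assumed kubeadm layout under the staticPodPath:
    ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml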
Feb 14 00:51:34.736331 containerd[1509]: time="2025-02-14T00:51:34.736272494Z" level=info msg="CreateContainer within sandbox \"28955eabbf80bf8f3e7fb50f3c653e446ccfa362aae8c6ef831d4eeb35e403fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68512d1e03554d8d5a9e97b4370aa62b9ce3f5c9e6bb0737fff879a1b28aecc8\"" Feb 14 00:51:34.738351 containerd[1509]: time="2025-02-14T00:51:34.738318959Z" level=info msg="StartContainer for \"68512d1e03554d8d5a9e97b4370aa62b9ce3f5c9e6bb0737fff879a1b28aecc8\"" Feb 14 00:51:34.741166 containerd[1509]: time="2025-02-14T00:51:34.741018744Z" level=info msg="CreateContainer within sandbox \"d25132e80c4103c8ed104738c1718e58580a19b7eb1f96cbae686c77b903b3bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b956d09780fde1dc4ca3f853f7fa7731acb6f208bd64bb5ae69dea3adcb947ad\"" Feb 14 00:51:34.742096 containerd[1509]: time="2025-02-14T00:51:34.741726843Z" level=info msg="StartContainer for \"b956d09780fde1dc4ca3f853f7fa7731acb6f208bd64bb5ae69dea3adcb947ad\"" Feb 14 00:51:34.750542 systemd[1]: Started cri-containerd-64ad9d0cbb520201fbcfe1c743ee35ec05c838da3f9b7168ed865f34fac7b94d.scope - libcontainer container 64ad9d0cbb520201fbcfe1c743ee35ec05c838da3f9b7168ed865f34fac7b94d. Feb 14 00:51:34.813709 systemd[1]: Started cri-containerd-68512d1e03554d8d5a9e97b4370aa62b9ce3f5c9e6bb0737fff879a1b28aecc8.scope - libcontainer container 68512d1e03554d8d5a9e97b4370aa62b9ce3f5c9e6bb0737fff879a1b28aecc8. Feb 14 00:51:34.816786 systemd[1]: Started cri-containerd-b956d09780fde1dc4ca3f853f7fa7731acb6f208bd64bb5ae69dea3adcb947ad.scope - libcontainer container b956d09780fde1dc4ca3f853f7fa7731acb6f208bd64bb5ae69dea3adcb947ad. Feb 14 00:51:34.867882 containerd[1509]: time="2025-02-14T00:51:34.867613864Z" level=info msg="StartContainer for \"64ad9d0cbb520201fbcfe1c743ee35ec05c838da3f9b7168ed865f34fac7b94d\" returns successfully" Feb 14 00:51:34.943316 containerd[1509]: time="2025-02-14T00:51:34.943154232Z" level=info msg="StartContainer for \"68512d1e03554d8d5a9e97b4370aa62b9ce3f5c9e6bb0737fff879a1b28aecc8\" returns successfully" Feb 14 00:51:34.956004 containerd[1509]: time="2025-02-14T00:51:34.955788036Z" level=info msg="StartContainer for \"b956d09780fde1dc4ca3f853f7fa7731acb6f208bd64bb5ae69dea3adcb947ad\" returns successfully" Feb 14 00:51:35.179985 kubelet[2243]: E0214 00:51:35.179918 2243 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.17.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.17.110:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:51:36.334299 kubelet[2243]: I0214 00:51:36.333995 2243 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:36.687087 update_engine[1493]: I20250214 00:51:36.686560 1493 update_attempter.cc:509] Updating boot flags... 
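The three control-plane containers are now running, while the kubelet's certificate manager is still refused on 10.230.17.110:6443; in other words, it is waiting for the very kube-apiserver it just launched to begin serving. Once the port answers, the bootstrap CSR goes through and the registration attempts stop failing (the node registers at 00:51:36 below). From any working kubeconfig the handshake could be observed with:

    # Hypothetical check once the API server answers on 6443:
    kubectl get csr
    kubectl get node srv-2zttm.gb1.brightbox.com -o wide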
Feb 14 00:51:36.786936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2528) Feb 14 00:51:36.946620 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2532) Feb 14 00:51:37.069440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2532) Feb 14 00:51:37.854424 kubelet[2243]: E0214 00:51:37.854248 2243 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-2zttm.gb1.brightbox.com\" not found" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:37.973794 kubelet[2243]: I0214 00:51:37.973731 2243 kubelet_node_status.go:75] "Successfully registered node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:38.086157 kubelet[2243]: I0214 00:51:38.086033 2243 apiserver.go:52] "Watching apiserver" Feb 14 00:51:38.114296 kubelet[2243]: I0214 00:51:38.113886 2243 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 00:51:40.134753 systemd[1]: Reloading requested from client PID 2537 ('systemctl') (unit session-9.scope)... Feb 14 00:51:40.134796 systemd[1]: Reloading... Feb 14 00:51:40.319614 zram_generator::config[2580]: No configuration found. Feb 14 00:51:40.522645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:51:40.659234 systemd[1]: Reloading finished in 523 ms. Feb 14 00:51:40.723091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:40.737138 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 00:51:40.737572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:40.737718 systemd[1]: kubelet.service: Consumed 1.568s CPU time, 117.9M memory peak, 0B memory swap peak. Feb 14 00:51:40.743828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:51:40.983640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:51:40.995890 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 00:51:41.094015 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:51:41.096435 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 00:51:41.096435 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
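The BTRFS duplicate-device warnings coincide with update_engine updating the boot flags: changing partition attributes triggers udev re-scans of /dev/vda3, and each scan makes the kernel report devid 1 again for the same filesystem, which is noisy but harmless here. A sanity check that the root filesystem still lists exactly one device would look like (hypothetical):

    # Confirm /dev/vda3 appears once for the root filesystem:
    btrfs filesystem show /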
Feb 14 00:51:41.096435 kubelet[2641]: I0214 00:51:41.094736 2641 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 00:51:41.104736 kubelet[2641]: I0214 00:51:41.104691 2641 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 14 00:51:41.104936 kubelet[2641]: I0214 00:51:41.104916 2641 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 00:51:41.105512 kubelet[2641]: I0214 00:51:41.105487 2641 server.go:929] "Client rotation is on, will bootstrap in background" Feb 14 00:51:41.107791 kubelet[2641]: I0214 00:51:41.107765 2641 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 14 00:51:41.118443 kubelet[2641]: I0214 00:51:41.118362 2641 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 00:51:41.124772 kubelet[2641]: E0214 00:51:41.124722 2641 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 00:51:41.125025 kubelet[2641]: I0214 00:51:41.124990 2641 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 00:51:41.130405 kubelet[2641]: I0214 00:51:41.130363 2641 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 14 00:51:41.132786 kubelet[2641]: I0214 00:51:41.131824 2641 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 00:51:41.132786 kubelet[2641]: I0214 00:51:41.132125 2641 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 00:51:41.132786 kubelet[2641]: I0214 00:51:41.132182 2641 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-2zttm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 00:51:41.132786 kubelet[2641]: I0214 00:51:41.132518 2641 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 00:51:41.133273 kubelet[2641]: I0214 00:51:41.132535 2641 container_manager_linux.go:300] "Creating device plugin manager" Feb 14 00:51:41.133273 kubelet[2641]: I0214 00:51:41.132613 2641 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:51:41.135990 kubelet[2641]: I0214 00:51:41.135959 2641 kubelet.go:408] "Attempting to sync node with API server" Feb 14 00:51:41.137149 kubelet[2641]: I0214 00:51:41.137126 2641 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 00:51:41.137333 kubelet[2641]: I0214 00:51:41.137311 2641 kubelet.go:314] "Adding apiserver pod source" Feb 14 00:51:41.137471 kubelet[2641]: I0214 00:51:41.137451 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 00:51:41.140036 kubelet[2641]: I0214 00:51:41.140001 2641 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 00:51:41.141084 kubelet[2641]: I0214 00:51:41.141042 2641 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 00:51:41.160911 kubelet[2641]: I0214 00:51:41.160563 2641 server.go:1269] "Started kubelet" Feb 14 00:51:41.165000 kubelet[2641]: I0214 00:51:41.164823 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 00:51:41.168239 kubelet[2641]: I0214 00:51:41.168179 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 00:51:41.178109 kubelet[2641]: I0214 00:51:41.177008 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 00:51:41.178109 kubelet[2641]: I0214 00:51:41.177584 2641 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 00:51:41.181324 kubelet[2641]: I0214 00:51:41.181293 2641 
server.go:460] "Adding debug handlers to kubelet server" Feb 14 00:51:41.184098 kubelet[2641]: I0214 00:51:41.184063 2641 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 14 00:51:41.187103 kubelet[2641]: I0214 00:51:41.187063 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 00:51:41.195240 kubelet[2641]: I0214 00:51:41.190508 2641 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 14 00:51:41.196625 kubelet[2641]: I0214 00:51:41.196602 2641 reconciler.go:26] "Reconciler: start to sync state" Feb 14 00:51:41.209557 kubelet[2641]: I0214 00:51:41.209206 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 00:51:41.211739 kubelet[2641]: E0214 00:51:41.209947 2641 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 00:51:41.212465 kubelet[2641]: I0214 00:51:41.210156 2641 factory.go:221] Registration of the containerd container factory successfully Feb 14 00:51:41.212593 kubelet[2641]: I0214 00:51:41.212574 2641 factory.go:221] Registration of the systemd container factory successfully Feb 14 00:51:41.213870 kubelet[2641]: I0214 00:51:41.212887 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 00:51:41.214347 kubelet[2641]: I0214 00:51:41.214311 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 00:51:41.214718 kubelet[2641]: I0214 00:51:41.214602 2641 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 00:51:41.214881 kubelet[2641]: I0214 00:51:41.214860 2641 kubelet.go:2321] "Starting kubelet main sync loop" Feb 14 00:51:41.215368 kubelet[2641]: E0214 00:51:41.215061 2641 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 00:51:41.315819 kubelet[2641]: E0214 00:51:41.315705 2641 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 14 00:51:41.316726 kubelet[2641]: I0214 00:51:41.316686 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 00:51:41.316726 kubelet[2641]: I0214 00:51:41.316716 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 00:51:41.316884 kubelet[2641]: I0214 00:51:41.316754 2641 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:51:41.317028 kubelet[2641]: I0214 00:51:41.316988 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 14 00:51:41.317109 kubelet[2641]: I0214 00:51:41.317020 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 14 00:51:41.317109 kubelet[2641]: I0214 00:51:41.317055 2641 policy_none.go:49] "None policy: Start" Feb 14 00:51:41.318766 kubelet[2641]: I0214 00:51:41.318603 2641 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 00:51:41.318766 kubelet[2641]: I0214 00:51:41.318649 2641 state_mem.go:35] "Initializing new in-memory state store" Feb 14 00:51:41.319862 kubelet[2641]: I0214 00:51:41.319150 2641 state_mem.go:75] "Updated machine memory state" Feb 14 00:51:41.331509 kubelet[2641]: I0214 00:51:41.330770 2641 manager.go:510] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 00:51:41.333159 kubelet[2641]: I0214 00:51:41.333136 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 00:51:41.334981 kubelet[2641]: I0214 00:51:41.333219 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 00:51:41.334981 kubelet[2641]: I0214 00:51:41.333782 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 00:51:41.457161 kubelet[2641]: I0214 00:51:41.456651 2641 kubelet_node_status.go:72] "Attempting to register node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.468688 kubelet[2641]: I0214 00:51:41.468645 2641 kubelet_node_status.go:111] "Node was previously registered" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.470186 kubelet[2641]: I0214 00:51:41.470140 2641 kubelet_node_status.go:75] "Successfully registered node" node="srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.530761 kubelet[2641]: W0214 00:51:41.530616 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:51:41.533117 kubelet[2641]: W0214 00:51:41.532894 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:51:41.533248 kubelet[2641]: W0214 00:51:41.533207 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:51:41.619822 kubelet[2641]: I0214 00:51:41.617558 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-ca-certs\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.619822 kubelet[2641]: I0214 00:51:41.618215 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-k8s-certs\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.619822 kubelet[2641]: I0214 00:51:41.618267 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-kubeconfig\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.619822 kubelet[2641]: I0214 00:51:41.618301 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-k8s-certs\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.619822 kubelet[2641]: I0214 00:51:41.618338 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.620318 kubelet[2641]: I0214 00:51:41.618371 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-flexvolume-dir\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.620318 kubelet[2641]: I0214 00:51:41.619290 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/437d5f12108ecd63012bf56e9f758f2a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2zttm.gb1.brightbox.com\" (UID: \"437d5f12108ecd63012bf56e9f758f2a\") " pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.620318 kubelet[2641]: I0214 00:51:41.619375 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e645571fbfd80d5392c3ed1e38f1c69b-kubeconfig\") pod \"kube-scheduler-srv-2zttm.gb1.brightbox.com\" (UID: \"e645571fbfd80d5392c3ed1e38f1c69b\") " pod="kube-system/kube-scheduler-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:41.620318 kubelet[2641]: I0214 00:51:41.619438 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d47e681e204ae903a6b4ba87c9f29044-ca-certs\") pod \"kube-apiserver-srv-2zttm.gb1.brightbox.com\" (UID: \"d47e681e204ae903a6b4ba87c9f29044\") " pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" Feb 14 00:51:42.139506 kubelet[2641]: I0214 00:51:42.138911 2641 apiserver.go:52] "Watching apiserver" Feb 14 00:51:42.196651 kubelet[2641]: I0214 00:51:42.196419 2641 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 00:51:42.339782 kubelet[2641]: I0214 00:51:42.339541 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-2zttm.gb1.brightbox.com" podStartSLOduration=1.339481464 podStartE2EDuration="1.339481464s" podCreationTimestamp="2025-02-14 00:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:51:42.337123543 +0000 UTC m=+1.327290449" watchObservedRunningTime="2025-02-14 00:51:42.339481464 +0000 UTC m=+1.329648345" Feb 14 00:51:42.395571 kubelet[2641]: I0214 00:51:42.394324 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-2zttm.gb1.brightbox.com" podStartSLOduration=1.394295604 podStartE2EDuration="1.394295604s" podCreationTimestamp="2025-02-14 00:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:51:42.39313522 +0000 UTC m=+1.383302102" watchObservedRunningTime="2025-02-14 00:51:42.394295604 +0000 UTC m=+1.384462497" Feb 14 00:51:42.475470 kubelet[2641]: I0214 00:51:42.475119 2641 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-2zttm.gb1.brightbox.com" podStartSLOduration=1.475089406 podStartE2EDuration="1.475089406s" podCreationTimestamp="2025-02-14 00:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:51:42.465691931 +0000 UTC m=+1.455858827" watchObservedRunningTime="2025-02-14 00:51:42.475089406 +0000 UTC m=+1.465256295" Feb 14 00:51:44.813970 kubelet[2641]: I0214 00:51:44.813530 2641 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 14 00:51:44.816694 kubelet[2641]: I0214 00:51:44.815918 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 14 00:51:44.817102 containerd[1509]: time="2025-02-14T00:51:44.815560879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 14 00:51:45.324518 kubelet[2641]: W0214 00:51:45.324167 2641 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-2zttm.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object Feb 14 00:51:45.324518 kubelet[2641]: E0214 00:51:45.324271 2641 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:srv-2zttm.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object" logger="UnhandledError" Feb 14 00:51:45.324518 kubelet[2641]: W0214 00:51:45.324452 2641 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-2zttm.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object Feb 14 00:51:45.324518 kubelet[2641]: E0214 00:51:45.324480 2641 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-2zttm.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object" logger="UnhandledError" Feb 14 00:51:45.345560 systemd[1]: Created slice kubepods-besteffort-podfc6cb54a_4e71_4481_8a2d_31f375b3bc5c.slice - libcontainer container kubepods-besteffort-podfc6cb54a_4e71_4481_8a2d_31f375b3bc5c.slice. 
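The runtime-config entries just above show the kubelet handing containerd the node's pod CIDR (192.168.0.0/24) over CRI. A bridge-style CNI plugin conventionally carves the node-side gateway out of that range; below is a minimal stdlib-only sketch of that derivation, where "deriveGateway" is an illustrative name and not anything from the kubelet or Calico sources:

    package main

    import (
        "fmt"
        "net"
    )

    // deriveGateway returns the first usable address of the pod CIDR,
    // which bridge-style CNI plugins typically assign to the node side.
    func deriveGateway(podCIDR string) (net.IP, *net.IPNet, error) {
        ip, ipnet, err := net.ParseCIDR(podCIDR)
        if err != nil {
            return nil, nil, err
        }
        gw := ip.Mask(ipnet.Mask).To4()
        if gw == nil {
            return nil, nil, fmt.Errorf("IPv4 CIDR expected: %s", podCIDR)
        }
        gw[3]++ // first host address: 192.168.0.1 for 192.168.0.0/24
        return gw, ipnet, nil
    }

    func main() {
        gw, ipnet, err := deriveGateway("192.168.0.0/24") // CIDR from the log above
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod network %s, node gateway %s\n", ipnet, gw)
    }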
Feb 14 00:51:45.349164 kubelet[2641]: I0214 00:51:45.347738 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-proxy\") pod \"kube-proxy-kf5rc\" (UID: \"fc6cb54a-4e71-4481-8a2d-31f375b3bc5c\") " pod="kube-system/kube-proxy-kf5rc" Feb 14 00:51:45.349164 kubelet[2641]: I0214 00:51:45.347802 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-lib-modules\") pod \"kube-proxy-kf5rc\" (UID: \"fc6cb54a-4e71-4481-8a2d-31f375b3bc5c\") " pod="kube-system/kube-proxy-kf5rc" Feb 14 00:51:45.349164 kubelet[2641]: I0214 00:51:45.347886 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-xtables-lock\") pod \"kube-proxy-kf5rc\" (UID: \"fc6cb54a-4e71-4481-8a2d-31f375b3bc5c\") " pod="kube-system/kube-proxy-kf5rc" Feb 14 00:51:45.349164 kubelet[2641]: I0214 00:51:45.347923 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnq6\" (UniqueName: \"kubernetes.io/projected/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-api-access-clnq6\") pod \"kube-proxy-kf5rc\" (UID: \"fc6cb54a-4e71-4481-8a2d-31f375b3bc5c\") " pod="kube-system/kube-proxy-kf5rc" Feb 14 00:51:45.871416 systemd[1]: Created slice kubepods-besteffort-pod80fa17af_9787_43f1_8b7f_d5c0b748390a.slice - libcontainer container kubepods-besteffort-pod80fa17af_9787_43f1_8b7f_d5c0b748390a.slice. Feb 14 00:51:45.952913 kubelet[2641]: I0214 00:51:45.952703 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80fa17af-9787-43f1-8b7f-d5c0b748390a-var-lib-calico\") pod \"tigera-operator-76c4976dd7-r8btj\" (UID: \"80fa17af-9787-43f1-8b7f-d5c0b748390a\") " pod="tigera-operator/tigera-operator-76c4976dd7-r8btj" Feb 14 00:51:45.952913 kubelet[2641]: I0214 00:51:45.952808 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nh4n\" (UniqueName: \"kubernetes.io/projected/80fa17af-9787-43f1-8b7f-d5c0b748390a-kube-api-access-2nh4n\") pod \"tigera-operator-76c4976dd7-r8btj\" (UID: \"80fa17af-9787-43f1-8b7f-d5c0b748390a\") " pod="tigera-operator/tigera-operator-76c4976dd7-r8btj" Feb 14 00:51:46.178433 containerd[1509]: time="2025-02-14T00:51:46.178154447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r8btj,Uid:80fa17af-9787-43f1-8b7f-d5c0b748390a,Namespace:tigera-operator,Attempt:0,}" Feb 14 00:51:46.221546 containerd[1509]: time="2025-02-14T00:51:46.220311832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:46.221546 containerd[1509]: time="2025-02-14T00:51:46.221433690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:46.221546 containerd[1509]: time="2025-02-14T00:51:46.221458032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:46.222095 containerd[1509]: time="2025-02-14T00:51:46.221727136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:46.259668 systemd[1]: Started cri-containerd-9030cf7f7430c025a28149a2f1c2b696e11618cefb3eb9c40d374af1e8ccc47c.scope - libcontainer container 9030cf7f7430c025a28149a2f1c2b696e11618cefb3eb9c40d374af1e8ccc47c. Feb 14 00:51:46.328971 containerd[1509]: time="2025-02-14T00:51:46.328864311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r8btj,Uid:80fa17af-9787-43f1-8b7f-d5c0b748390a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9030cf7f7430c025a28149a2f1c2b696e11618cefb3eb9c40d374af1e8ccc47c\"" Feb 14 00:51:46.332537 containerd[1509]: time="2025-02-14T00:51:46.332448342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 14 00:51:46.450436 kubelet[2641]: E0214 00:51:46.450171 2641 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 14 00:51:46.450436 kubelet[2641]: E0214 00:51:46.450373 2641 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-proxy podName:fc6cb54a-4e71-4481-8a2d-31f375b3bc5c nodeName:}" failed. No retries permitted until 2025-02-14 00:51:46.950323527 +0000 UTC m=+5.940490404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-proxy") pod "kube-proxy-kf5rc" (UID: "fc6cb54a-4e71-4481-8a2d-31f375b3bc5c") : failed to sync configmap cache: timed out waiting for the condition Feb 14 00:51:46.459274 kubelet[2641]: E0214 00:51:46.459137 2641 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 14 00:51:46.459274 kubelet[2641]: E0214 00:51:46.459180 2641 projected.go:194] Error preparing data for projected volume kube-api-access-clnq6 for pod kube-system/kube-proxy-kf5rc: failed to sync configmap cache: timed out waiting for the condition Feb 14 00:51:46.459274 kubelet[2641]: E0214 00:51:46.459269 2641 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-api-access-clnq6 podName:fc6cb54a-4e71-4481-8a2d-31f375b3bc5c nodeName:}" failed. No retries permitted until 2025-02-14 00:51:46.959234432 +0000 UTC m=+5.949401312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-clnq6" (UniqueName: "kubernetes.io/projected/fc6cb54a-4e71-4481-8a2d-31f375b3bc5c-kube-api-access-clnq6") pod "kube-proxy-kf5rc" (UID: "fc6cb54a-4e71-4481-8a2d-31f375b3bc5c") : failed to sync configmap cache: timed out waiting for the condition Feb 14 00:51:47.157408 containerd[1509]: time="2025-02-14T00:51:47.157251283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kf5rc,Uid:fc6cb54a-4e71-4481-8a2d-31f375b3bc5c,Namespace:kube-system,Attempt:0,}" Feb 14 00:51:47.189463 containerd[1509]: time="2025-02-14T00:51:47.189023346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:47.189463 containerd[1509]: time="2025-02-14T00:51:47.189170285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:47.189463 containerd[1509]: time="2025-02-14T00:51:47.189197822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:47.189463 containerd[1509]: time="2025-02-14T00:51:47.189453209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:47.213835 systemd[1]: run-containerd-runc-k8s.io-7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456-runc.gQX20U.mount: Deactivated successfully. Feb 14 00:51:47.226615 systemd[1]: Started cri-containerd-7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456.scope - libcontainer container 7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456. Feb 14 00:51:47.269714 containerd[1509]: time="2025-02-14T00:51:47.269654355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kf5rc,Uid:fc6cb54a-4e71-4481-8a2d-31f375b3bc5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456\"" Feb 14 00:51:47.276963 containerd[1509]: time="2025-02-14T00:51:47.276921133Z" level=info msg="CreateContainer within sandbox \"7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 00:51:47.298625 containerd[1509]: time="2025-02-14T00:51:47.298553045Z" level=info msg="CreateContainer within sandbox \"7bf123f74365cf01652e756dc8d6defd4e6cfb57144bbe65b7c47c320bc4b456\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b29bdcac3d8e593be8279e22356cc703ea0a482d58bb160a02a818f765aa8fe\"" Feb 14 00:51:47.300115 containerd[1509]: time="2025-02-14T00:51:47.299737944Z" level=info msg="StartContainer for \"1b29bdcac3d8e593be8279e22356cc703ea0a482d58bb160a02a818f765aa8fe\"" Feb 14 00:51:47.347669 systemd[1]: Started cri-containerd-1b29bdcac3d8e593be8279e22356cc703ea0a482d58bb160a02a818f765aa8fe.scope - libcontainer container 1b29bdcac3d8e593be8279e22356cc703ea0a482d58bb160a02a818f765aa8fe. Feb 14 00:51:47.403226 containerd[1509]: time="2025-02-14T00:51:47.403119563Z" level=info msg="StartContainer for \"1b29bdcac3d8e593be8279e22356cc703ea0a482d58bb160a02a818f765aa8fe\" returns successfully" Feb 14 00:51:47.644439 sudo[1743]: pam_unix(sudo:session): session closed for user root Feb 14 00:51:47.791114 sshd[1740]: pam_unix(sshd:session): session closed for user core Feb 14 00:51:47.797934 systemd[1]: sshd@6-10.230.17.110:22-147.75.109.163:47610.service: Deactivated successfully. Feb 14 00:51:47.802758 systemd[1]: session-9.scope: Deactivated successfully. Feb 14 00:51:47.803151 systemd[1]: session-9.scope: Consumed 6.511s CPU time, 145.8M memory peak, 0B memory swap peak. Feb 14 00:51:47.804308 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Feb 14 00:51:47.808123 systemd-logind[1486]: Removed session 9. 
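The pod_startup_latency_tracker lines in this log report podStartSLOduration as, roughly, observedRunningTime minus podCreationTimestamp, with pull time excluded when nothing was pulled (firstStartedPulling is the zero time). A small reconstruction of that arithmetic for the kube-proxy-kf5rc entry just below; the layout string matches the default Go time.Time formatting these messages use, and the interpretation of the SLO duration is an assumption, not a quote from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamp layout as printed in the kubelet log entries.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-02-14 00:51:45 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-02-14 00:51:48.366150669 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Prints 3.366150669s; the logged podStartSLOduration=3.368757612
        // differs by a few ms because the tracker samples its own clock.
        fmt.Println("startup duration:", running.Sub(created))
    }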
Feb 14 00:51:48.370431 kubelet[2641]: I0214 00:51:48.368787 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kf5rc" podStartSLOduration=3.368757612 podStartE2EDuration="3.368757612s" podCreationTimestamp="2025-02-14 00:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:51:48.366150669 +0000 UTC m=+7.356317568" watchObservedRunningTime="2025-02-14 00:51:48.368757612 +0000 UTC m=+7.358924500" Feb 14 00:51:48.546597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453913150.mount: Deactivated successfully. Feb 14 00:51:49.343468 containerd[1509]: time="2025-02-14T00:51:49.343046289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:49.344968 containerd[1509]: time="2025-02-14T00:51:49.344911701Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 14 00:51:49.346068 containerd[1509]: time="2025-02-14T00:51:49.346000538Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:49.363243 containerd[1509]: time="2025-02-14T00:51:49.363167421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:49.365575 containerd[1509]: time="2025-02-14T00:51:49.365307967Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.03277605s" Feb 14 00:51:49.365575 containerd[1509]: time="2025-02-14T00:51:49.365359452Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 14 00:51:49.369030 containerd[1509]: time="2025-02-14T00:51:49.368933765Z" level=info msg="CreateContainer within sandbox \"9030cf7f7430c025a28149a2f1c2b696e11618cefb3eb9c40d374af1e8ccc47c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 14 00:51:49.388141 containerd[1509]: time="2025-02-14T00:51:49.387954822Z" level=info msg="CreateContainer within sandbox \"9030cf7f7430c025a28149a2f1c2b696e11618cefb3eb9c40d374af1e8ccc47c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6b604a4dd6eb3264bd2be07189e1b0adaee7c4b602705357aca5eeb3f8e1e3ff\"" Feb 14 00:51:49.388951 containerd[1509]: time="2025-02-14T00:51:49.388881859Z" level=info msg="StartContainer for \"6b604a4dd6eb3264bd2be07189e1b0adaee7c4b602705357aca5eeb3f8e1e3ff\"" Feb 14 00:51:49.442852 systemd[1]: Started cri-containerd-6b604a4dd6eb3264bd2be07189e1b0adaee7c4b602705357aca5eeb3f8e1e3ff.scope - libcontainer container 6b604a4dd6eb3264bd2be07189e1b0adaee7c4b602705357aca5eeb3f8e1e3ff. 
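The pull above resolves a tag (quay.io/tigera/operator:v1.36.2) to a pinned repo digest, and both forms appear in the "Pulled image" entry. Splitting such a reference into repository, tag, and digest needs only string handling; a hedged sketch follows — real tooling would use a proper reference-parsing library, and "splitRef" here deliberately ignores edge cases like registry ports:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks an image reference of the shape repo[:tag][@digest]
    // into its parts. Registry-port and other edge cases are ignored.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        // References taken from the pull entries above.
        for _, ref := range []string{
            "quay.io/tigera/operator:v1.36.2",
            "quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764",
        } {
            repo, tag, digest := splitRef(ref)
            fmt.Printf("repo=%s tag=%q digest=%q\n", repo, tag, digest)
        }
    }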
Feb 14 00:51:49.490127 containerd[1509]: time="2025-02-14T00:51:49.488495287Z" level=info msg="StartContainer for \"6b604a4dd6eb3264bd2be07189e1b0adaee7c4b602705357aca5eeb3f8e1e3ff\" returns successfully" Feb 14 00:51:50.322156 kubelet[2641]: I0214 00:51:50.321221 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-r8btj" podStartSLOduration=2.285374267 podStartE2EDuration="5.32119657s" podCreationTimestamp="2025-02-14 00:51:45 +0000 UTC" firstStartedPulling="2025-02-14 00:51:46.331168257 +0000 UTC m=+5.321335136" lastFinishedPulling="2025-02-14 00:51:49.366990563 +0000 UTC m=+8.357157439" observedRunningTime="2025-02-14 00:51:50.320883393 +0000 UTC m=+9.311050296" watchObservedRunningTime="2025-02-14 00:51:50.32119657 +0000 UTC m=+9.311363460" Feb 14 00:51:53.007005 systemd[1]: Created slice kubepods-besteffort-pod55b08cea_7654_47c1_9665_600b1dcc7097.slice - libcontainer container kubepods-besteffort-pod55b08cea_7654_47c1_9665_600b1dcc7097.slice. Feb 14 00:51:53.110028 kubelet[2641]: I0214 00:51:53.109923 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ftc\" (UniqueName: \"kubernetes.io/projected/55b08cea-7654-47c1-9665-600b1dcc7097-kube-api-access-m5ftc\") pod \"calico-typha-6496cfd596-sq7gg\" (UID: \"55b08cea-7654-47c1-9665-600b1dcc7097\") " pod="calico-system/calico-typha-6496cfd596-sq7gg" Feb 14 00:51:53.110028 kubelet[2641]: I0214 00:51:53.110039 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55b08cea-7654-47c1-9665-600b1dcc7097-typha-certs\") pod \"calico-typha-6496cfd596-sq7gg\" (UID: \"55b08cea-7654-47c1-9665-600b1dcc7097\") " pod="calico-system/calico-typha-6496cfd596-sq7gg" Feb 14 00:51:53.110802 kubelet[2641]: I0214 00:51:53.110081 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b08cea-7654-47c1-9665-600b1dcc7097-tigera-ca-bundle\") pod \"calico-typha-6496cfd596-sq7gg\" (UID: \"55b08cea-7654-47c1-9665-600b1dcc7097\") " pod="calico-system/calico-typha-6496cfd596-sq7gg" Feb 14 00:51:53.325581 kubelet[2641]: E0214 00:51:53.325510 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:51:53.353428 systemd[1]: Created slice kubepods-besteffort-podb62675ed_ed64_4af0_8892_ab5151d23e6b.slice - libcontainer container kubepods-besteffort-podb62675ed_ed64_4af0_8892_ab5151d23e6b.slice. 
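The "cni plugin not initialized" errors for csi-node-driver-6rfjl above will repeat until calico-node writes a network config into the directory the runtime watches (compare containerd's earlier "wait for other system components to drop the config" message). A small sketch of that readiness check, assuming the conventional /etc/cni/net.d path — the actual directory is configurable in containerd:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Conventional CNI config dir; containerd's default, but configurable.
        const cniDir = "/etc/cni/net.d"

        entries, err := os.ReadDir(cniDir)
        if err != nil {
            fmt.Println("CNI not ready:", err)
            return
        }
        ready := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                ready = true
            }
        }
        if !ready {
            fmt.Println("CNI not ready: no network config dropped yet")
        }
    }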
Feb 14 00:51:53.414514 kubelet[2641]: I0214 00:51:53.414464 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b62675ed-ed64-4af0-8892-ab5151d23e6b-node-certs\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.414729 kubelet[2641]: I0214 00:51:53.414534 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dtj\" (UniqueName: \"kubernetes.io/projected/63e6501f-0b84-4dd0-abbf-bfa62e42e8b0-kube-api-access-t6dtj\") pod \"csi-node-driver-6rfjl\" (UID: \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\") " pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:51:53.414729 kubelet[2641]: I0214 00:51:53.414585 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-var-lib-calico\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.414729 kubelet[2641]: I0214 00:51:53.414616 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/63e6501f-0b84-4dd0-abbf-bfa62e42e8b0-varrun\") pod \"csi-node-driver-6rfjl\" (UID: \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\") " pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:51:53.414729 kubelet[2641]: I0214 00:51:53.414646 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-cni-log-dir\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.414729 kubelet[2641]: I0214 00:51:53.414677 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-xtables-lock\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415015 kubelet[2641]: I0214 00:51:53.414706 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-flexvol-driver-host\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415015 kubelet[2641]: I0214 00:51:53.414734 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63e6501f-0b84-4dd0-abbf-bfa62e42e8b0-kubelet-dir\") pod \"csi-node-driver-6rfjl\" (UID: \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\") " pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:51:53.415015 kubelet[2641]: I0214 00:51:53.414764 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-lib-modules\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415015 kubelet[2641]: I0214 00:51:53.414791 2641 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63e6501f-0b84-4dd0-abbf-bfa62e42e8b0-registration-dir\") pod \"csi-node-driver-6rfjl\" (UID: \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\") " pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:51:53.415015 kubelet[2641]: I0214 00:51:53.414820 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-policysync\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415250 kubelet[2641]: I0214 00:51:53.414847 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-cni-net-dir\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415250 kubelet[2641]: I0214 00:51:53.414876 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7zn\" (UniqueName: \"kubernetes.io/projected/b62675ed-ed64-4af0-8892-ab5151d23e6b-kube-api-access-zm7zn\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415250 kubelet[2641]: I0214 00:51:53.414903 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63e6501f-0b84-4dd0-abbf-bfa62e42e8b0-socket-dir\") pod \"csi-node-driver-6rfjl\" (UID: \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\") " pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:51:53.415250 kubelet[2641]: I0214 00:51:53.414934 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-var-run-calico\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415250 kubelet[2641]: I0214 00:51:53.414960 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b62675ed-ed64-4af0-8892-ab5151d23e6b-tigera-ca-bundle\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.415605 kubelet[2641]: I0214 00:51:53.415006 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b62675ed-ed64-4af0-8892-ab5151d23e6b-cni-bin-dir\") pod \"calico-node-6p8xl\" (UID: \"b62675ed-ed64-4af0-8892-ab5151d23e6b\") " pod="calico-system/calico-node-6p8xl" Feb 14 00:51:53.528617 kubelet[2641]: E0214 00:51:53.528569 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:53.528617 kubelet[2641]: W0214 00:51:53.528613 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:53.528850 kubelet[2641]: E0214 00:51:53.528660 2641 
plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:53.537300 kubelet[2641]: E0214 00:51:53.536623 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:53.537300 kubelet[2641]: W0214 00:51:53.536654 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:53.537300 kubelet[2641]: E0214 00:51:53.536684 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:53.550574 kubelet[2641]: E0214 00:51:53.550535 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:53.550720 kubelet[2641]: W0214 00:51:53.550588 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:53.550720 kubelet[2641]: E0214 00:51:53.550622 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:53.638200 containerd[1509]: time="2025-02-14T00:51:53.638025710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6496cfd596-sq7gg,Uid:55b08cea-7654-47c1-9665-600b1dcc7097,Namespace:calico-system,Attempt:0,}" Feb 14 00:51:53.661066 containerd[1509]: time="2025-02-14T00:51:53.661006235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6p8xl,Uid:b62675ed-ed64-4af0-8892-ab5151d23e6b,Namespace:calico-system,Attempt:0,}" Feb 14 00:51:53.691762 containerd[1509]: time="2025-02-14T00:51:53.691372629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:53.692546 containerd[1509]: time="2025-02-14T00:51:53.691785125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:53.692546 containerd[1509]: time="2025-02-14T00:51:53.691862067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:53.692546 containerd[1509]: time="2025-02-14T00:51:53.692011366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:53.738067 containerd[1509]: time="2025-02-14T00:51:53.737649902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:51:53.738067 containerd[1509]: time="2025-02-14T00:51:53.737781404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:51:53.738067 containerd[1509]: time="2025-02-14T00:51:53.737805602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:53.738067 containerd[1509]: time="2025-02-14T00:51:53.737964008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:51:53.738959 systemd[1]: Started cri-containerd-e4f4d3e5cea89430f3949be8a3686e34b6197b2e55a9dcee9174ba213d61d85b.scope - libcontainer container e4f4d3e5cea89430f3949be8a3686e34b6197b2e55a9dcee9174ba213d61d85b. Feb 14 00:51:53.781752 systemd[1]: Started cri-containerd-d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3.scope - libcontainer container d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3. Feb 14 00:51:53.861855 containerd[1509]: time="2025-02-14T00:51:53.861781562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6496cfd596-sq7gg,Uid:55b08cea-7654-47c1-9665-600b1dcc7097,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4f4d3e5cea89430f3949be8a3686e34b6197b2e55a9dcee9174ba213d61d85b\"" Feb 14 00:51:53.866368 containerd[1509]: time="2025-02-14T00:51:53.865144362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6p8xl,Uid:b62675ed-ed64-4af0-8892-ab5151d23e6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\"" Feb 14 00:51:53.868174 containerd[1509]: time="2025-02-14T00:51:53.866995243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 14 00:51:55.216977 kubelet[2641]: E0214 00:51:55.215473 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:51:55.523294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290338984.mount: Deactivated successfully. 
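The driver-call failures above ("unexpected end of JSON input") come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before that binary exists; it is presumably installed later by Calico's flexvol-driver host path seen in the volume list. A FlexVolume driver answers "init" with a JSON status on stdout; a minimal sketch of that handshake, where the attach=false capability is an assumption appropriate for a node-local socket driver:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // initResult mirrors the JSON a FlexVolume driver prints for "init".
    type initResult struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 || os.Args[1] != "init" {
            // Real drivers also implement mount/unmount; omitted in this sketch.
            json.NewEncoder(os.Stdout).Encode(initResult{Status: "Not supported"})
            os.Exit(1)
        }
        out := initResult{
            Status: "Success",
            // attach=false: a mount-only, node-local driver (assumed).
            Capabilities: map[string]bool{"attach": false},
        }
        if err := json.NewEncoder(os.Stdout).Encode(out); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }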
Feb 14 00:51:57.094197 containerd[1509]: time="2025-02-14T00:51:57.093180786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 14 00:51:57.094197 containerd[1509]: time="2025-02-14T00:51:57.093537636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:57.097279 containerd[1509]: time="2025-02-14T00:51:57.096343086Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:57.104083 containerd[1509]: time="2025-02-14T00:51:57.103940316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:57.106010 containerd[1509]: time="2025-02-14T00:51:57.105640377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.238109933s" Feb 14 00:51:57.106010 containerd[1509]: time="2025-02-14T00:51:57.105746119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 14 00:51:57.132374 containerd[1509]: time="2025-02-14T00:51:57.132237912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 14 00:51:57.184420 containerd[1509]: time="2025-02-14T00:51:57.183975986Z" level=info msg="CreateContainer within sandbox \"e4f4d3e5cea89430f3949be8a3686e34b6197b2e55a9dcee9174ba213d61d85b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 14 00:51:57.212076 containerd[1509]: time="2025-02-14T00:51:57.210870225Z" level=info msg="CreateContainer within sandbox \"e4f4d3e5cea89430f3949be8a3686e34b6197b2e55a9dcee9174ba213d61d85b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bef628a9777a85d1f154942bf1cfc9fa06f9d866db29abc85a7655302dc0e4c1\"" Feb 14 00:51:57.215073 containerd[1509]: time="2025-02-14T00:51:57.213764558Z" level=info msg="StartContainer for \"bef628a9777a85d1f154942bf1cfc9fa06f9d866db29abc85a7655302dc0e4c1\"" Feb 14 00:51:57.226824 kubelet[2641]: E0214 00:51:57.225565 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:51:57.328002 systemd[1]: Started cri-containerd-bef628a9777a85d1f154942bf1cfc9fa06f9d866db29abc85a7655302dc0e4c1.scope - libcontainer container bef628a9777a85d1f154942bf1cfc9fa06f9d866db29abc85a7655302dc0e4c1. 
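The PullImage / CreateContainer / StartContainer sequence above is driven by the kubelet through CRI, but the same pull can be reproduced against containerd's native API. A hedged sketch using the containerd 1.x Go client, with the default socket path and the "k8s.io" namespace (where Kubernetes-pulled images live) taken as assumptions matching this host:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket, as used by containerd[1509] above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-pulled images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Image reference taken from the pull entries above.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", image.Name(), "digest", image.Target().Digest)
    }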
Feb 14 00:51:57.447813 containerd[1509]: time="2025-02-14T00:51:57.446944369Z" level=info msg="StartContainer for \"bef628a9777a85d1f154942bf1cfc9fa06f9d866db29abc85a7655302dc0e4c1\" returns successfully" Feb 14 00:51:58.387516 kubelet[2641]: I0214 00:51:58.386039 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6496cfd596-sq7gg" podStartSLOduration=3.118929741 podStartE2EDuration="6.386016692s" podCreationTimestamp="2025-02-14 00:51:52 +0000 UTC" firstStartedPulling="2025-02-14 00:51:53.863827094 +0000 UTC m=+12.853993973" lastFinishedPulling="2025-02-14 00:51:57.130914032 +0000 UTC m=+16.121080924" observedRunningTime="2025-02-14 00:51:58.383598271 +0000 UTC m=+17.373765163" watchObservedRunningTime="2025-02-14 00:51:58.386016692 +0000 UTC m=+17.376183573" Feb 14 00:51:58.442687 kubelet[2641]: E0214 00:51:58.442638 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.442687 kubelet[2641]: W0214 00:51:58.442677 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.442974 kubelet[2641]: E0214 00:51:58.442715 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.443085 kubelet[2641]: E0214 00:51:58.443040 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.443085 kubelet[2641]: W0214 00:51:58.443055 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.443085 kubelet[2641]: E0214 00:51:58.443071 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.443332 kubelet[2641]: E0214 00:51:58.443312 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.443332 kubelet[2641]: W0214 00:51:58.443332 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.443492 kubelet[2641]: E0214 00:51:58.443347 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.443628 kubelet[2641]: E0214 00:51:58.443609 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.443628 kubelet[2641]: W0214 00:51:58.443627 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.443748 kubelet[2641]: E0214 00:51:58.443642 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:51:58.443918 kubelet[2641]: E0214 00:51:58.443890 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.444081 kubelet[2641]: W0214 00:51:58.443927 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.444081 kubelet[2641]: E0214 00:51:58.443946 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.444247 kubelet[2641]: E0214 00:51:58.444194 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.444247 kubelet[2641]: W0214 00:51:58.444209 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.444247 kubelet[2641]: E0214 00:51:58.444232 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.444508 kubelet[2641]: E0214 00:51:58.444489 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.444508 kubelet[2641]: W0214 00:51:58.444508 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.444641 kubelet[2641]: E0214 00:51:58.444523 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.444786 kubelet[2641]: E0214 00:51:58.444767 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.444786 kubelet[2641]: W0214 00:51:58.444785 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.444925 kubelet[2641]: E0214 00:51:58.444800 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.445081 kubelet[2641]: E0214 00:51:58.445061 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.445081 kubelet[2641]: W0214 00:51:58.445080 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.445191 kubelet[2641]: E0214 00:51:58.445096 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:51:58.445337 kubelet[2641]: E0214 00:51:58.445318 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.445337 kubelet[2641]: W0214 00:51:58.445335 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.445519 kubelet[2641]: E0214 00:51:58.445350 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.445618 kubelet[2641]: E0214 00:51:58.445596 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.445618 kubelet[2641]: W0214 00:51:58.445609 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.445618 kubelet[2641]: E0214 00:51:58.445622 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.445919 kubelet[2641]: E0214 00:51:58.445892 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.445985 kubelet[2641]: W0214 00:51:58.445920 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.445985 kubelet[2641]: E0214 00:51:58.445937 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.446189 kubelet[2641]: E0214 00:51:58.446170 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.446189 kubelet[2641]: W0214 00:51:58.446188 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.446309 kubelet[2641]: E0214 00:51:58.446203 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.446495 kubelet[2641]: E0214 00:51:58.446475 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.446495 kubelet[2641]: W0214 00:51:58.446494 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.446623 kubelet[2641]: E0214 00:51:58.446509 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:51:58.457540 kubelet[2641]: E0214 00:51:58.457497 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.457540 kubelet[2641]: W0214 00:51:58.457511 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.457622 kubelet[2641]: E0214 00:51:58.457544 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.457819 kubelet[2641]: E0214 00:51:58.457800 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.457819 kubelet[2641]: W0214 00:51:58.457819 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.457945 kubelet[2641]: E0214 00:51:58.457834 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.458128 kubelet[2641]: E0214 00:51:58.458108 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.458189 kubelet[2641]: W0214 00:51:58.458128 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.458189 kubelet[2641]: E0214 00:51:58.458144 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:51:58.458898 kubelet[2641]: E0214 00:51:58.458877 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:51:58.458898 kubelet[2641]: W0214 00:51:58.458897 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:51:58.459046 kubelet[2641]: E0214 00:51:58.458924 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:51:58.926556 containerd[1509]: time="2025-02-14T00:51:58.926491304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:58.928819 containerd[1509]: time="2025-02-14T00:51:58.928535663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 14 00:51:58.929788 containerd[1509]: time="2025-02-14T00:51:58.929721444Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:58.951522 containerd[1509]: time="2025-02-14T00:51:58.951441982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:51:58.952823 containerd[1509]: time="2025-02-14T00:51:58.952735082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.81946253s" Feb 14 00:51:58.953068 containerd[1509]: time="2025-02-14T00:51:58.952958080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 14 00:51:58.956284 containerd[1509]: time="2025-02-14T00:51:58.956076458Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 00:51:58.974936 containerd[1509]: time="2025-02-14T00:51:58.974761121Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe\"" Feb 14 00:51:58.977175 containerd[1509]: time="2025-02-14T00:51:58.976749720Z" level=info msg="StartContainer for \"907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe\"" Feb 14 00:51:59.040749 systemd[1]: Started cri-containerd-907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe.scope - libcontainer container 907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe. Feb 14 00:51:59.085670 containerd[1509]: time="2025-02-14T00:51:59.085581620Z" level=info msg="StartContainer for \"907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe\" returns successfully" Feb 14 00:51:59.116725 systemd[1]: cri-containerd-907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe.scope: Deactivated successfully. Feb 14 00:51:59.160366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe-rootfs.mount: Deactivated successfully. 
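The burst of kubelet errors above is the FlexVolume plugin prober at work: on each probe pass the kubelet execs the driver binary it expects at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds (the directory name follows the FlexVolume vendor~driver convention) with the argument init, and tries to parse a JSON status object from stdout. Because the binary is not installed yet, the exec fails, stdout is empty, and the JSON decode fails with "unexpected end of JSON input". As a minimal sketch of that contract in Go (illustrative only, not Calico's real nodeagent~uds binary), a FlexVolume driver answering init looks roughly like this:

    package main

    // Illustrative FlexVolume driver skeleton. The kubelet invokes the binary
    // with a command ("init", "mount", ...) and parses a JSON status object
    // from stdout; empty stdout is exactly what produces the
    // "unexpected end of JSON input" errors in the log above.
    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus, code int) {
        json.NewEncoder(os.Stdout).Encode(s)
        os.Exit(code)
    }

    func main() {
        if len(os.Args) < 2 {
            reply(driverStatus{Status: "Failure", Message: "no command given"}, 1)
        }
        switch os.Args[1] {
        case "init":
            // Report success and tell the kubelet not to route attach/detach
            // calls through this driver.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
        default:
            // mount/unmount and friends are out of scope for this sketch.
            reply(driverStatus{Status: "Not supported"}, 1)
        }
    }

The pod2daemon-flexvol image whose pull completes in the containerd lines above is Calico's installer for that driver, which is why the probe errors stop recurring after this point in the log. The lines that follow the pull (CreateContainer within sandbox d72c9f9e… for flexvol-driver, StartContainer, then the systemd scope deactivating and the rootfs unmount) are the normal lifecycle of a run-once init container. Reduced to the plain containerd Go client rather than the CRI path the kubelet actually drives (image ref taken from the log; the container and snapshot IDs are invented here; error handling elided to log.Fatal), the same pull-create-start-wait flow is roughly:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Connect to the same containerd the log lines come from; CRI-managed
        // resources live in the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Corresponds to the "PullImage ... returns image reference" event.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer, reduced to the plain client API.
        container, err := client.NewContainer(ctx, "flexvol-driver-sketch",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("flexvol-driver-sketch-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        statusC, err := task.Wait(ctx) // subscribe before Start so the exit isn't missed
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil { // StartContainer
            log.Fatal(err)
        }

        // The "scope: Deactivated successfully" lines are systemd observing this exit.
        status := <-statusC
        code, _, err := status.Result()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("init container exited with status %d", code)
    }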
Feb 14 00:51:59.219734 kubelet[2641]: E0214 00:51:59.217734 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:51:59.234607 containerd[1509]: time="2025-02-14T00:51:59.178995027Z" level=info msg="shim disconnected" id=907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe namespace=k8s.io Feb 14 00:51:59.235199 containerd[1509]: time="2025-02-14T00:51:59.234787902Z" level=warning msg="cleaning up after shim disconnected" id=907ac85b5906241bc2d6d225f70e0a20072c5358b96a16f6889ee17fda1f27fe namespace=k8s.io Feb 14 00:51:59.235199 containerd[1509]: time="2025-02-14T00:51:59.234821461Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:51:59.362106 kubelet[2641]: I0214 00:51:59.361754 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:51:59.364149 containerd[1509]: time="2025-02-14T00:51:59.364099026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 14 00:52:01.217058 kubelet[2641]: E0214 00:52:01.216351 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:03.217220 kubelet[2641]: E0214 00:52:03.215911 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:05.218310 kubelet[2641]: E0214 00:52:05.218187 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:06.303486 containerd[1509]: time="2025-02-14T00:52:06.303431493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:06.310468 containerd[1509]: time="2025-02-14T00:52:06.310243687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 14 00:52:06.317557 containerd[1509]: time="2025-02-14T00:52:06.317374024Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:06.320606 containerd[1509]: time="2025-02-14T00:52:06.320569025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:06.322405 containerd[1509]: time="2025-02-14T00:52:06.321814506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.957654595s" Feb 14 00:52:06.322405 containerd[1509]: time="2025-02-14T00:52:06.321870788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 14 00:52:06.326213 containerd[1509]: time="2025-02-14T00:52:06.326088449Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 00:52:06.404956 containerd[1509]: time="2025-02-14T00:52:06.404607470Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584\"" Feb 14 00:52:06.405923 containerd[1509]: time="2025-02-14T00:52:06.405587562Z" level=info msg="StartContainer for \"43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584\"" Feb 14 00:52:06.476303 systemd[1]: run-containerd-runc-k8s.io-43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584-runc.TBtg5j.mount: Deactivated successfully. Feb 14 00:52:06.485828 systemd[1]: Started cri-containerd-43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584.scope - libcontainer container 43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584. Feb 14 00:52:06.536575 containerd[1509]: time="2025-02-14T00:52:06.536317305Z" level=info msg="StartContainer for \"43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584\" returns successfully" Feb 14 00:52:07.217637 kubelet[2641]: E0214 00:52:07.215972 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:07.678146 systemd[1]: cri-containerd-43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584.scope: Deactivated successfully. Feb 14 00:52:07.721055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584-rootfs.mount: Deactivated successfully. Feb 14 00:52:07.725417 containerd[1509]: time="2025-02-14T00:52:07.724804689Z" level=info msg="shim disconnected" id=43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584 namespace=k8s.io Feb 14 00:52:07.725417 containerd[1509]: time="2025-02-14T00:52:07.725030442Z" level=warning msg="cleaning up after shim disconnected" id=43d55e8fc1d82e431d0c2abaca0982657c1dabcf2e7ed53176f7ada2c8c0d584 namespace=k8s.io Feb 14 00:52:07.725417 containerd[1509]: time="2025-02-14T00:52:07.725048278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:52:07.750418 kubelet[2641]: I0214 00:52:07.747407 2641 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 14 00:52:07.801961 systemd[1]: Created slice kubepods-burstable-pod421f9c58_7a2f_4ef5_8284_a6c9420a9ad4.slice - libcontainer container kubepods-burstable-pod421f9c58_7a2f_4ef5_8284_a6c9420a9ad4.slice. 
Feb 14 00:52:07.812376 kubelet[2641]: W0214 00:52:07.812319 2641 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-2zttm.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object Feb 14 00:52:07.813243 systemd[1]: Created slice kubepods-besteffort-pod5a23e5d8_3212_4b6f_b57a_e861b760ed5a.slice - libcontainer container kubepods-besteffort-pod5a23e5d8_3212_4b6f_b57a_e861b760ed5a.slice. Feb 14 00:52:07.814274 kubelet[2641]: E0214 00:52:07.813798 2641 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-2zttm.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-2zttm.gb1.brightbox.com' and this object" logger="UnhandledError" Feb 14 00:52:07.823571 systemd[1]: Created slice kubepods-burstable-podf57ee0d6_5cc3_4e98_9d2a_d8690b89184b.slice - libcontainer container kubepods-burstable-podf57ee0d6_5cc3_4e98_9d2a_d8690b89184b.slice. Feb 14 00:52:07.835811 systemd[1]: Created slice kubepods-besteffort-podc282c195_46d8_48e7_ac08_deb4910a1446.slice - libcontainer container kubepods-besteffort-podc282c195_46d8_48e7_ac08_deb4910a1446.slice. Feb 14 00:52:07.844808 systemd[1]: Created slice kubepods-besteffort-pod899cb3f5_312b_45b1_b391_beaf3be22e8b.slice - libcontainer container kubepods-besteffort-pod899cb3f5_312b_45b1_b391_beaf3be22e8b.slice. Feb 14 00:52:07.927575 kubelet[2641]: I0214 00:52:07.927070 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/421f9c58-7a2f-4ef5-8284-a6c9420a9ad4-config-volume\") pod \"coredns-6f6b679f8f-5rrrv\" (UID: \"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4\") " pod="kube-system/coredns-6f6b679f8f-5rrrv" Feb 14 00:52:07.927575 kubelet[2641]: I0214 00:52:07.927147 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57ee0d6-5cc3-4e98-9d2a-d8690b89184b-config-volume\") pod \"coredns-6f6b679f8f-sw62j\" (UID: \"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b\") " pod="kube-system/coredns-6f6b679f8f-sw62j" Feb 14 00:52:07.927575 kubelet[2641]: I0214 00:52:07.927185 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/899cb3f5-312b-45b1-b391-beaf3be22e8b-calico-apiserver-certs\") pod \"calico-apiserver-77fc7c7db4-nz6x9\" (UID: \"899cb3f5-312b-45b1-b391-beaf3be22e8b\") " pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" Feb 14 00:52:07.927575 kubelet[2641]: I0214 00:52:07.927219 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a23e5d8-3212-4b6f-b57a-e861b760ed5a-tigera-ca-bundle\") pod \"calico-kube-controllers-98789948-gtrxm\" (UID: \"5a23e5d8-3212-4b6f-b57a-e861b760ed5a\") " pod="calico-system/calico-kube-controllers-98789948-gtrxm" Feb 14 00:52:07.927575 kubelet[2641]: I0214 00:52:07.927253 2641 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c282c195-46d8-48e7-ac08-deb4910a1446-calico-apiserver-certs\") pod \"calico-apiserver-77fc7c7db4-g66xr\" (UID: \"c282c195-46d8-48e7-ac08-deb4910a1446\") " pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" Feb 14 00:52:07.928025 kubelet[2641]: I0214 00:52:07.927284 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcs8\" (UniqueName: \"kubernetes.io/projected/c282c195-46d8-48e7-ac08-deb4910a1446-kube-api-access-gvcs8\") pod \"calico-apiserver-77fc7c7db4-g66xr\" (UID: \"c282c195-46d8-48e7-ac08-deb4910a1446\") " pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" Feb 14 00:52:07.928025 kubelet[2641]: I0214 00:52:07.927314 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxsz\" (UniqueName: \"kubernetes.io/projected/f57ee0d6-5cc3-4e98-9d2a-d8690b89184b-kube-api-access-5nxsz\") pod \"coredns-6f6b679f8f-sw62j\" (UID: \"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b\") " pod="kube-system/coredns-6f6b679f8f-sw62j" Feb 14 00:52:07.928025 kubelet[2641]: I0214 00:52:07.927343 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8k22\" (UniqueName: \"kubernetes.io/projected/899cb3f5-312b-45b1-b391-beaf3be22e8b-kube-api-access-v8k22\") pod \"calico-apiserver-77fc7c7db4-nz6x9\" (UID: \"899cb3f5-312b-45b1-b391-beaf3be22e8b\") " pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" Feb 14 00:52:07.928025 kubelet[2641]: I0214 00:52:07.927375 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsk7\" (UniqueName: \"kubernetes.io/projected/5a23e5d8-3212-4b6f-b57a-e861b760ed5a-kube-api-access-hfsk7\") pod \"calico-kube-controllers-98789948-gtrxm\" (UID: \"5a23e5d8-3212-4b6f-b57a-e861b760ed5a\") " pod="calico-system/calico-kube-controllers-98789948-gtrxm" Feb 14 00:52:07.928025 kubelet[2641]: I0214 00:52:07.927432 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblsn\" (UniqueName: \"kubernetes.io/projected/421f9c58-7a2f-4ef5-8284-a6c9420a9ad4-kube-api-access-vblsn\") pod \"coredns-6f6b679f8f-5rrrv\" (UID: \"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4\") " pod="kube-system/coredns-6f6b679f8f-5rrrv" Feb 14 00:52:08.109698 containerd[1509]: time="2025-02-14T00:52:08.109602851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rrrv,Uid:421f9c58-7a2f-4ef5-8284-a6c9420a9ad4,Namespace:kube-system,Attempt:0,}" Feb 14 00:52:08.120005 containerd[1509]: time="2025-02-14T00:52:08.119611013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98789948-gtrxm,Uid:5a23e5d8-3212-4b6f-b57a-e861b760ed5a,Namespace:calico-system,Attempt:0,}" Feb 14 00:52:08.131843 containerd[1509]: time="2025-02-14T00:52:08.131613931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw62j,Uid:f57ee0d6-5cc3-4e98-9d2a-d8690b89184b,Namespace:kube-system,Attempt:0,}" Feb 14 00:52:08.430895 containerd[1509]: time="2025-02-14T00:52:08.430708190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 14 00:52:08.508410 containerd[1509]: time="2025-02-14T00:52:08.507893408Z" level=error msg="Failed to destroy network for sandbox 
\"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.511635 containerd[1509]: time="2025-02-14T00:52:08.511592258Z" level=error msg="Failed to destroy network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.514832 containerd[1509]: time="2025-02-14T00:52:08.514752867Z" level=error msg="encountered an error cleaning up failed sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.514901 containerd[1509]: time="2025-02-14T00:52:08.514849077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw62j,Uid:f57ee0d6-5cc3-4e98-9d2a-d8690b89184b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.516037 containerd[1509]: time="2025-02-14T00:52:08.515995988Z" level=error msg="Failed to destroy network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.516590 containerd[1509]: time="2025-02-14T00:52:08.516549889Z" level=error msg="encountered an error cleaning up failed sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.516983 containerd[1509]: time="2025-02-14T00:52:08.516721261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rrrv,Uid:421f9c58-7a2f-4ef5-8284-a6c9420a9ad4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.521090 containerd[1509]: time="2025-02-14T00:52:08.520655452Z" level=error msg="encountered an error cleaning up failed sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.521090 containerd[1509]: time="2025-02-14T00:52:08.520722713Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98789948-gtrxm,Uid:5a23e5d8-3212-4b6f-b57a-e861b760ed5a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.525634 kubelet[2641]: E0214 00:52:08.521513 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.525634 kubelet[2641]: E0214 00:52:08.521650 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sw62j" Feb 14 00:52:08.525634 kubelet[2641]: E0214 00:52:08.521686 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sw62j" Feb 14 00:52:08.528555 kubelet[2641]: E0214 00:52:08.521782 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sw62j_kube-system(f57ee0d6-5cc3-4e98-9d2a-d8690b89184b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sw62j_kube-system(f57ee0d6-5cc3-4e98-9d2a-d8690b89184b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw62j" podUID="f57ee0d6-5cc3-4e98-9d2a-d8690b89184b" Feb 14 00:52:08.528555 kubelet[2641]: E0214 00:52:08.525934 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.528555 kubelet[2641]: E0214 00:52:08.525995 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5rrrv" Feb 14 00:52:08.528748 kubelet[2641]: E0214 00:52:08.526022 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5rrrv" Feb 14 00:52:08.528748 kubelet[2641]: E0214 00:52:08.526071 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5rrrv_kube-system(421f9c58-7a2f-4ef5-8284-a6c9420a9ad4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5rrrv_kube-system(421f9c58-7a2f-4ef5-8284-a6c9420a9ad4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5rrrv" podUID="421f9c58-7a2f-4ef5-8284-a6c9420a9ad4" Feb 14 00:52:08.528748 kubelet[2641]: E0214 00:52:08.526156 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:08.529417 kubelet[2641]: E0214 00:52:08.526194 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98789948-gtrxm" Feb 14 00:52:08.529417 kubelet[2641]: E0214 00:52:08.526218 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98789948-gtrxm" Feb 14 00:52:08.529417 kubelet[2641]: E0214 00:52:08.526255 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-98789948-gtrxm_calico-system(5a23e5d8-3212-4b6f-b57a-e861b760ed5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-98789948-gtrxm_calico-system(5a23e5d8-3212-4b6f-b57a-e861b760ed5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-98789948-gtrxm" podUID="5a23e5d8-3212-4b6f-b57a-e861b760ed5a" Feb 14 00:52:08.723371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb-shm.mount: Deactivated successfully. Feb 14 00:52:09.041087 containerd[1509]: time="2025-02-14T00:52:09.040890999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-g66xr,Uid:c282c195-46d8-48e7-ac08-deb4910a1446,Namespace:calico-apiserver,Attempt:0,}" Feb 14 00:52:09.050017 containerd[1509]: time="2025-02-14T00:52:09.049730528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-nz6x9,Uid:899cb3f5-312b-45b1-b391-beaf3be22e8b,Namespace:calico-apiserver,Attempt:0,}" Feb 14 00:52:09.186463 containerd[1509]: time="2025-02-14T00:52:09.186357500Z" level=error msg="Failed to destroy network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.188392 containerd[1509]: time="2025-02-14T00:52:09.187030314Z" level=error msg="encountered an error cleaning up failed sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.188392 containerd[1509]: time="2025-02-14T00:52:09.187151988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-nz6x9,Uid:899cb3f5-312b-45b1-b391-beaf3be22e8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.188635 kubelet[2641]: E0214 00:52:09.187524 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.188635 kubelet[2641]: E0214 00:52:09.187618 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" Feb 14 00:52:09.188635 kubelet[2641]: E0214 00:52:09.187651 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" Feb 14 00:52:09.188835 kubelet[2641]: E0214 00:52:09.187730 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fc7c7db4-nz6x9_calico-apiserver(899cb3f5-312b-45b1-b391-beaf3be22e8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fc7c7db4-nz6x9_calico-apiserver(899cb3f5-312b-45b1-b391-beaf3be22e8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" podUID="899cb3f5-312b-45b1-b391-beaf3be22e8b" Feb 14 00:52:09.192783 containerd[1509]: time="2025-02-14T00:52:09.192674509Z" level=error msg="Failed to destroy network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.193913 containerd[1509]: time="2025-02-14T00:52:09.193666131Z" level=error msg="encountered an error cleaning up failed sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.194000 containerd[1509]: time="2025-02-14T00:52:09.193903152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-g66xr,Uid:c282c195-46d8-48e7-ac08-deb4910a1446,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.194699 kubelet[2641]: E0214 00:52:09.194363 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.194699 kubelet[2641]: E0214 00:52:09.194551 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" Feb 14 00:52:09.194699 kubelet[2641]: E0214 00:52:09.194583 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" Feb 14 00:52:09.195277 kubelet[2641]: E0214 00:52:09.194645 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fc7c7db4-g66xr_calico-apiserver(c282c195-46d8-48e7-ac08-deb4910a1446)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fc7c7db4-g66xr_calico-apiserver(c282c195-46d8-48e7-ac08-deb4910a1446)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" podUID="c282c195-46d8-48e7-ac08-deb4910a1446" Feb 14 00:52:09.225179 systemd[1]: Created slice kubepods-besteffort-pod63e6501f_0b84_4dd0_abbf_bfa62e42e8b0.slice - libcontainer container kubepods-besteffort-pod63e6501f_0b84_4dd0_abbf_bfa62e42e8b0.slice. Feb 14 00:52:09.229578 containerd[1509]: time="2025-02-14T00:52:09.229512583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rfjl,Uid:63e6501f-0b84-4dd0-abbf-bfa62e42e8b0,Namespace:calico-system,Attempt:0,}" Feb 14 00:52:09.325979 containerd[1509]: time="2025-02-14T00:52:09.325902673Z" level=error msg="Failed to destroy network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.326859 containerd[1509]: time="2025-02-14T00:52:09.326513679Z" level=error msg="encountered an error cleaning up failed sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.326859 containerd[1509]: time="2025-02-14T00:52:09.326591751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rfjl,Uid:63e6501f-0b84-4dd0-abbf-bfa62e42e8b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.327330 kubelet[2641]: E0214 00:52:09.326989 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.327330 kubelet[2641]: E0214 00:52:09.327070 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:52:09.327330 kubelet[2641]: E0214 00:52:09.327142 2641 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rfjl" Feb 14 00:52:09.328279 kubelet[2641]: E0214 00:52:09.327329 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6rfjl_calico-system(63e6501f-0b84-4dd0-abbf-bfa62e42e8b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6rfjl_calico-system(63e6501f-0b84-4dd0-abbf-bfa62e42e8b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:09.429423 kubelet[2641]: I0214 00:52:09.429246 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:09.433183 kubelet[2641]: I0214 00:52:09.432559 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:09.436504 containerd[1509]: time="2025-02-14T00:52:09.436451526Z" level=info msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" Feb 14 00:52:09.437632 containerd[1509]: time="2025-02-14T00:52:09.437352625Z" level=info msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" Feb 14 00:52:09.438558 containerd[1509]: time="2025-02-14T00:52:09.438528170Z" level=info msg="Ensure that sandbox 69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c in task-service has been cleanup successfully" Feb 14 00:52:09.439468 containerd[1509]: time="2025-02-14T00:52:09.438611810Z" level=info msg="Ensure that sandbox e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f in task-service has been cleanup successfully" Feb 14 00:52:09.444231 kubelet[2641]: I0214 00:52:09.444195 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:09.446039 containerd[1509]: time="2025-02-14T00:52:09.445996047Z" level=info msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" Feb 14 00:52:09.446575 containerd[1509]: time="2025-02-14T00:52:09.446262900Z" level=info msg="Ensure that sandbox bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7 in task-service has been cleanup successfully" Feb 14 00:52:09.449488 kubelet[2641]: I0214 00:52:09.449337 2641 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:09.451833 containerd[1509]: time="2025-02-14T00:52:09.451202110Z" level=info msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" Feb 14 00:52:09.451833 containerd[1509]: time="2025-02-14T00:52:09.451474671Z" level=info msg="Ensure that sandbox 2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb in task-service has been cleanup successfully" Feb 14 00:52:09.457466 kubelet[2641]: I0214 00:52:09.456916 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:09.458809 containerd[1509]: time="2025-02-14T00:52:09.458770384Z" level=info msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" Feb 14 00:52:09.460225 containerd[1509]: time="2025-02-14T00:52:09.459978898Z" level=info msg="Ensure that sandbox 2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6 in task-service has been cleanup successfully" Feb 14 00:52:09.468465 kubelet[2641]: I0214 00:52:09.468422 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:09.470417 containerd[1509]: time="2025-02-14T00:52:09.469727561Z" level=info msg="StopPodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" Feb 14 00:52:09.474615 containerd[1509]: time="2025-02-14T00:52:09.474460417Z" level=info msg="Ensure that sandbox 8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54 in task-service has been cleanup successfully" Feb 14 00:52:09.579541 containerd[1509]: time="2025-02-14T00:52:09.579286929Z" level=error msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" failed" error="failed to destroy network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.580984 kubelet[2641]: E0214 00:52:09.580249 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:09.580984 kubelet[2641]: E0214 00:52:09.580681 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6"} Feb 14 00:52:09.580984 kubelet[2641]: E0214 00:52:09.580895 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Feb 14 00:52:09.580984 kubelet[2641]: E0214 00:52:09.580935 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw62j" podUID="f57ee0d6-5cc3-4e98-9d2a-d8690b89184b" Feb 14 00:52:09.582655 containerd[1509]: time="2025-02-14T00:52:09.580875306Z" level=error msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" failed" error="failed to destroy network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.582716 kubelet[2641]: E0214 00:52:09.581505 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:09.582716 kubelet[2641]: E0214 00:52:09.582192 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f"} Feb 14 00:52:09.582716 kubelet[2641]: E0214 00:52:09.582233 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"899cb3f5-312b-45b1-b391-beaf3be22e8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:52:09.582716 kubelet[2641]: E0214 00:52:09.582288 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"899cb3f5-312b-45b1-b391-beaf3be22e8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" podUID="899cb3f5-312b-45b1-b391-beaf3be22e8b" Feb 14 00:52:09.604819 containerd[1509]: time="2025-02-14T00:52:09.604533764Z" level=error msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" failed" error="failed to destroy network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.605397 kubelet[2641]: E0214 00:52:09.605175 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:09.605397 kubelet[2641]: E0214 00:52:09.605274 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb"} Feb 14 00:52:09.605397 kubelet[2641]: E0214 00:52:09.605341 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:52:09.606522 kubelet[2641]: E0214 00:52:09.605440 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5rrrv" podUID="421f9c58-7a2f-4ef5-8284-a6c9420a9ad4" Feb 14 00:52:09.616416 containerd[1509]: time="2025-02-14T00:52:09.616079344Z" level=error msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" failed" error="failed to destroy network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.617079 kubelet[2641]: E0214 00:52:09.616849 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:09.617200 kubelet[2641]: E0214 00:52:09.617124 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7"} Feb 14 00:52:09.617259 kubelet[2641]: E0214 00:52:09.617205 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a23e5d8-3212-4b6f-b57a-e861b760ed5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:52:09.617527 kubelet[2641]: E0214 00:52:09.617250 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a23e5d8-3212-4b6f-b57a-e861b760ed5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-98789948-gtrxm" podUID="5a23e5d8-3212-4b6f-b57a-e861b760ed5a" Feb 14 00:52:09.620789 containerd[1509]: time="2025-02-14T00:52:09.620708606Z" level=error msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" failed" error="failed to destroy network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.621297 kubelet[2641]: E0214 00:52:09.621040 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:09.621297 kubelet[2641]: E0214 00:52:09.621093 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c"} Feb 14 00:52:09.621297 kubelet[2641]: E0214 00:52:09.621132 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c282c195-46d8-48e7-ac08-deb4910a1446\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:52:09.621297 kubelet[2641]: E0214 00:52:09.621164 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c282c195-46d8-48e7-ac08-deb4910a1446\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" podUID="c282c195-46d8-48e7-ac08-deb4910a1446" Feb 14 00:52:09.621646 containerd[1509]: time="2025-02-14T00:52:09.621322567Z" level=error msg="StopPodSandbox for 
\"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" failed" error="failed to destroy network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:52:09.621724 kubelet[2641]: E0214 00:52:09.621663 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:09.621724 kubelet[2641]: E0214 00:52:09.621703 2641 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54"} Feb 14 00:52:09.621834 kubelet[2641]: E0214 00:52:09.621764 2641 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:52:09.621834 kubelet[2641]: E0214 00:52:09.621796 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rfjl" podUID="63e6501f-0b84-4dd0-abbf-bfa62e42e8b0" Feb 14 00:52:09.721105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f-shm.mount: Deactivated successfully. Feb 14 00:52:09.721277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c-shm.mount: Deactivated successfully. Feb 14 00:52:15.247215 kubelet[2641]: I0214 00:52:15.247161 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:52:19.248881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685346898.mount: Deactivated successfully. 
Feb 14 00:52:19.386732 containerd[1509]: time="2025-02-14T00:52:19.380499700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:19.388106 containerd[1509]: time="2025-02-14T00:52:19.371883357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 14 00:52:19.429981 containerd[1509]: time="2025-02-14T00:52:19.429808690Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:19.432412 containerd[1509]: time="2025-02-14T00:52:19.431667773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:19.435560 containerd[1509]: time="2025-02-14T00:52:19.435491413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.999947136s" Feb 14 00:52:19.435694 containerd[1509]: time="2025-02-14T00:52:19.435564509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 14 00:52:19.501721 containerd[1509]: time="2025-02-14T00:52:19.501354921Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 00:52:19.554363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184712781.mount: Deactivated successfully. Feb 14 00:52:19.571823 containerd[1509]: time="2025-02-14T00:52:19.571658277Z" level=info msg="CreateContainer within sandbox \"d72c9f9e7cfce0a2bff7038c574ea82f8479950ae0c4dab7ec2ab6e2cd0457a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca\"" Feb 14 00:52:19.576630 containerd[1509]: time="2025-02-14T00:52:19.576591820Z" level=info msg="StartContainer for \"ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca\"" Feb 14 00:52:19.771037 systemd[1]: Started cri-containerd-ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca.scope - libcontainer container ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca. Feb 14 00:52:19.831466 containerd[1509]: time="2025-02-14T00:52:19.831290342Z" level=info msg="StartContainer for \"ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca\" returns successfully" Feb 14 00:52:20.015231 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 14 00:52:20.018646 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
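[Annotation] The "Pulled image" entry above reports 142,741,872 bytes transferred in 10.999947136 s, i.e. roughly 12.4 MiB/s; the WireGuard module load that follows is likely triggered by the freshly started calico-node probing the kernel for WireGuard support. A quick check of the throughput figure (illustrative only, numbers taken verbatim from the entry):

package main

import "fmt"

func main() {
	const bytes = 142741872.0    // repo size from the "Pulled image" entry
	const seconds = 10.999947136 // pull duration from the same entry
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints 12.4 MiB/s
}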
Feb 14 00:52:20.218780 containerd[1509]: time="2025-02-14T00:52:20.218704276Z" level=info msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" Feb 14 00:52:20.219996 containerd[1509]: time="2025-02-14T00:52:20.219350217Z" level=info msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" Feb 14 00:52:20.220904 containerd[1509]: time="2025-02-14T00:52:20.220840329Z" level=info msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" Feb 14 00:52:20.223751 containerd[1509]: time="2025-02-14T00:52:20.218705623Z" level=info msg="StopPodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" Feb 14 00:52:20.685046 kubelet[2641]: I0214 00:52:20.670351 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6p8xl" podStartSLOduration=2.054534779 podStartE2EDuration="27.624028051s" podCreationTimestamp="2025-02-14 00:51:53 +0000 UTC" firstStartedPulling="2025-02-14 00:51:53.867457325 +0000 UTC m=+12.857624204" lastFinishedPulling="2025-02-14 00:52:19.436950601 +0000 UTC m=+38.427117476" observedRunningTime="2025-02-14 00:52:20.623528982 +0000 UTC m=+39.613695910" watchObservedRunningTime="2025-02-14 00:52:20.624028051 +0000 UTC m=+39.614194930" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.485 [INFO][3756] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.485 [INFO][3756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" iface="eth0" netns="/var/run/netns/cni-562e2bf2-3361-b702-a71a-23a5cdb45c4d" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.488 [INFO][3756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" iface="eth0" netns="/var/run/netns/cni-562e2bf2-3361-b702-a71a-23a5cdb45c4d" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.491 [INFO][3756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" iface="eth0" netns="/var/run/netns/cni-562e2bf2-3361-b702-a71a-23a5cdb45c4d" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.492 [INFO][3756] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.492 [INFO][3756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.745 [INFO][3781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.750 [INFO][3781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.753 [INFO][3781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.795 [WARNING][3781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.795 [INFO][3781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.799 [INFO][3781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:20.825258 containerd[1509]: 2025-02-14 00:52:20.808 [INFO][3756] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:20.833656 systemd[1]: run-netns-cni\x2d562e2bf2\x2d3361\x2db702\x2da71a\x2d23a5cdb45c4d.mount: Deactivated successfully. Feb 14 00:52:20.835419 containerd[1509]: time="2025-02-14T00:52:20.835352014Z" level=info msg="TearDown network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" successfully" Feb 14 00:52:20.835643 containerd[1509]: time="2025-02-14T00:52:20.835613224Z" level=info msg="StopPodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" returns successfully" Feb 14 00:52:20.840424 containerd[1509]: time="2025-02-14T00:52:20.840333012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rfjl,Uid:63e6501f-0b84-4dd0-abbf-bfa62e42e8b0,Namespace:calico-system,Attempt:1,}" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.480 [INFO][3755] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.480 [INFO][3755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" iface="eth0" netns="/var/run/netns/cni-ec38a744-6b26-d405-34f2-07fb91175c52" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.482 [INFO][3755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" iface="eth0" netns="/var/run/netns/cni-ec38a744-6b26-d405-34f2-07fb91175c52" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.487 [INFO][3755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" iface="eth0" netns="/var/run/netns/cni-ec38a744-6b26-d405-34f2-07fb91175c52" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.487 [INFO][3755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.487 [INFO][3755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.770 [INFO][3779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.770 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.801 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.818 [WARNING][3779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.818 [INFO][3779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.823 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:20.858090 containerd[1509]: 2025-02-14 00:52:20.841 [INFO][3755] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:20.863756 containerd[1509]: time="2025-02-14T00:52:20.863619672Z" level=info msg="TearDown network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" successfully" Feb 14 00:52:20.866832 containerd[1509]: time="2025-02-14T00:52:20.866643829Z" level=info msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" returns successfully" Feb 14 00:52:20.868234 systemd[1]: run-netns-cni\x2dec38a744\x2d6b26\x2dd405\x2d34f2\x2d07fb91175c52.mount: Deactivated successfully. Feb 14 00:52:20.872267 containerd[1509]: time="2025-02-14T00:52:20.870795190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-g66xr,Uid:c282c195-46d8-48e7-ac08-deb4910a1446,Namespace:calico-apiserver,Attempt:1,}" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.485 [INFO][3753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.488 [INFO][3753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" iface="eth0" netns="/var/run/netns/cni-16e9545d-51b9-0f79-db5c-72bb0e3211e1" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.488 [INFO][3753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" iface="eth0" netns="/var/run/netns/cni-16e9545d-51b9-0f79-db5c-72bb0e3211e1" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.490 [INFO][3753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" iface="eth0" netns="/var/run/netns/cni-16e9545d-51b9-0f79-db5c-72bb0e3211e1" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.491 [INFO][3753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.491 [INFO][3753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.764 [INFO][3780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.772 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.823 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.861 [WARNING][3780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.861 [INFO][3780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.869 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:20.913909 containerd[1509]: 2025-02-14 00:52:20.878 [INFO][3753] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:20.915126 containerd[1509]: time="2025-02-14T00:52:20.914015361Z" level=info msg="TearDown network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" successfully" Feb 14 00:52:20.915126 containerd[1509]: time="2025-02-14T00:52:20.914090383Z" level=info msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" returns successfully" Feb 14 00:52:20.918934 containerd[1509]: time="2025-02-14T00:52:20.918651876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw62j,Uid:f57ee0d6-5cc3-4e98-9d2a-d8690b89184b,Namespace:kube-system,Attempt:1,}" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.478 [INFO][3754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.482 [INFO][3754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" iface="eth0" netns="/var/run/netns/cni-5c98f53f-bebd-ff04-d103-7dec0ab1f1ff" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.483 [INFO][3754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" iface="eth0" netns="/var/run/netns/cni-5c98f53f-bebd-ff04-d103-7dec0ab1f1ff" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.484 [INFO][3754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" iface="eth0" netns="/var/run/netns/cni-5c98f53f-bebd-ff04-d103-7dec0ab1f1ff" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.485 [INFO][3754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.487 [INFO][3754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.781 [INFO][3778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.785 [INFO][3778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.870 [INFO][3778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.902 [WARNING][3778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.902 [INFO][3778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.911 [INFO][3778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:20.938044 containerd[1509]: 2025-02-14 00:52:20.925 [INFO][3754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:20.941036 containerd[1509]: time="2025-02-14T00:52:20.938284238Z" level=info msg="TearDown network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" successfully" Feb 14 00:52:20.941036 containerd[1509]: time="2025-02-14T00:52:20.938321944Z" level=info msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" returns successfully" Feb 14 00:52:20.950113 containerd[1509]: time="2025-02-14T00:52:20.949711036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98789948-gtrxm,Uid:5a23e5d8-3212-4b6f-b57a-e861b760ed5a,Namespace:calico-system,Attempt:1,}" Feb 14 00:52:21.263191 systemd[1]: run-netns-cni\x2d5c98f53f\x2dbebd\x2dff04\x2dd103\x2d7dec0ab1f1ff.mount: Deactivated successfully. Feb 14 00:52:21.263363 systemd[1]: run-netns-cni\x2d16e9545d\x2d51b9\x2d0f79\x2ddb5c\x2d72bb0e3211e1.mount: Deactivated successfully. Feb 14 00:52:21.413634 systemd[1]: run-containerd-runc-k8s.io-ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca-runc.Bdekd8.mount: Deactivated successfully. Feb 14 00:52:21.506515 systemd-networkd[1424]: cali036b59d302a: Link UP Feb 14 00:52:21.508460 systemd-networkd[1424]: cali036b59d302a: Gained carrier Feb 14 00:52:21.590942 systemd[1]: Started sshd@7-10.230.17.110:22-218.92.0.226:30252.service - OpenSSH per-connection server daemon (218.92.0.226:30252). 
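[Annotation] With the stale sandboxes torn down and recreated (Attempt:1), the entries that follow trace the CNI ADD path: acquire the host-wide IPAM lock, look up the host's block affinity, load block 192.168.96.64/26, and hand out the first free addresses (.65, .66, .67 below). A compact sketch of that block-based first-free assignment, using hypothetical names rather than Calico's real API:

package main

import (
	"fmt"
	"net"
)

// block models a host-affine IPAM block such as 192.168.96.64/26.
type block struct {
	base net.IP          // network address of the block
	used map[string]bool // addresses already handed out
}

// assign returns the first free host address in the /26 (network and
// broadcast excluded), mirroring the .65/.66/.67 sequence in the log.
func (b *block) assign() (net.IP, error) {
	for off := 1; off <= 62; off++ {
		ip := make(net.IP, len(b.base))
		copy(ip, b.base)
		ip[len(ip)-1] += byte(off)
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.base)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.96.64/26")
	b := &block{base: cidr.IP, used: map[string]bool{}}
	for i := 0; i < 3; i++ {
		ip, _ := b.assign()
		fmt.Println(ip) // 192.168.96.65, 192.168.96.66, 192.168.96.67
	}
}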
Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.001 [INFO][3814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.038 [INFO][3814] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0 csi-node-driver- calico-system 63e6501f-0b84-4dd0-abbf-bfa62e42e8b0 773 0 2025-02-14 00:51:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com csi-node-driver-6rfjl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali036b59d302a [] []}} ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.038 [INFO][3814] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.347 [INFO][3880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" HandleID="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.383 [INFO][3880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" HandleID="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407850), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2zttm.gb1.brightbox.com", "pod":"csi-node-driver-6rfjl", "timestamp":"2025-02-14 00:52:21.347583028 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.383 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.383 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.383 [INFO][3880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.390 [INFO][3880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.423 [INFO][3880] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.439 [INFO][3880] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.443 [INFO][3880] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.447 [INFO][3880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.448 [INFO][3880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.451 [INFO][3880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656 Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.459 [INFO][3880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.65/26] block=192.168.96.64/26 handle="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.65/26] handle="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:52:21.597119 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.65/26] IPv6=[] ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" HandleID="k8s-pod-network.4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.475 [INFO][3814] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-6rfjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali036b59d302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.475 [INFO][3814] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.65/32] ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.475 [INFO][3814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali036b59d302a ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.506 [INFO][3814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.517 [INFO][3814] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656", Pod:"csi-node-driver-6rfjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali036b59d302a", MAC:"46:4c:51:7f:d8:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.598208 containerd[1509]: 2025-02-14 00:52:21.542 [INFO][3814] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656" Namespace="calico-system" Pod="csi-node-driver-6rfjl" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:21.721483 systemd-networkd[1424]: calid6494ae946f: Link UP Feb 14 00:52:21.722753 systemd-networkd[1424]: calid6494ae946f: Gained carrier Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.134 [INFO][3826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.192 [INFO][3826] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0 calico-apiserver-77fc7c7db4- calico-apiserver c282c195-46d8-48e7-ac08-deb4910a1446 771 0 2025-02-14 00:51:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fc7c7db4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com calico-apiserver-77fc7c7db4-g66xr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid6494ae946f [] []}} ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.192 
[INFO][3826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.401 [INFO][3890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" HandleID="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.434 [INFO][3890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" HandleID="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e1850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-2zttm.gb1.brightbox.com", "pod":"calico-apiserver-77fc7c7db4-g66xr", "timestamp":"2025-02-14 00:52:21.400995218 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.434 [INFO][3890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.470 [INFO][3890] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.501 [INFO][3890] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.527 [INFO][3890] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.556 [INFO][3890] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.581 [INFO][3890] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.609 [INFO][3890] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.616 [INFO][3890] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.638 [INFO][3890] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196 Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.670 [INFO][3890] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.690 [INFO][3890] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.66/26] block=192.168.96.64/26 handle="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.692 [INFO][3890] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.66/26] handle="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.693 [INFO][3890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:52:21.754462 containerd[1509]: 2025-02-14 00:52:21.695 [INFO][3890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.66/26] IPv6=[] ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" HandleID="k8s-pod-network.6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.713 [INFO][3826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c282c195-46d8-48e7-ac08-deb4910a1446", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77fc7c7db4-g66xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6494ae946f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.713 [INFO][3826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.66/32] ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.714 [INFO][3826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6494ae946f ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.723 [INFO][3826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.725 [INFO][3826] cni-plugin/k8s.go 414:
Added Mac, interface name, and active container ID to endpoint ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c282c195-46d8-48e7-ac08-deb4910a1446", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196", Pod:"calico-apiserver-77fc7c7db4-g66xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6494ae946f", MAC:"96:61:b3:43:11:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.756237 containerd[1509]: 2025-02-14 00:52:21.747 [INFO][3826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-g66xr" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:21.825589 containerd[1509]: time="2025-02-14T00:52:21.824082450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:21.825589 containerd[1509]: time="2025-02-14T00:52:21.824230855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:21.825589 containerd[1509]: time="2025-02-14T00:52:21.824252804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:21.827621 containerd[1509]: time="2025-02-14T00:52:21.826660859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:21.868215 systemd-networkd[1424]: calib65d5247ed5: Link UP Feb 14 00:52:21.876647 systemd-networkd[1424]: calib65d5247ed5: Gained carrier Feb 14 00:52:21.924674 systemd[1]: Started cri-containerd-4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656.scope - libcontainer container 4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656.
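[Annotation] Each RunPodSandbox launches a fresh runc shim, which is what the "loading plugin io.containerd.*" entries with runtime=io.containerd.runc.v2 correspond to, and systemd then tracks the new container in a transient scope unit named after its ID, as in the Started line above. A trivial helper reproducing the visible naming pattern (hypothetical; containerd's systemd cgroup driver does the real naming):

package main

import "fmt"

// scopeUnit reproduces the unit-name pattern seen in the log:
// "cri-containerd-" + <64-hex container ID> + ".scope".
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	// Matches the sandbox scope started above.
	fmt.Println(scopeUnit("4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656"))
}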
Feb 14 00:52:21.938822 containerd[1509]: time="2025-02-14T00:52:21.938339431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:21.938822 containerd[1509]: time="2025-02-14T00:52:21.938468593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:21.938822 containerd[1509]: time="2025-02-14T00:52:21.938487713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:21.938822 containerd[1509]: time="2025-02-14T00:52:21.938689358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.082 [INFO][3841] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.135 [INFO][3841] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0 coredns-6f6b679f8f- kube-system f57ee0d6-5cc3-4e98-9d2a-d8690b89184b 774 0 2025-02-14 00:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com coredns-6f6b679f8f-sw62j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib65d5247ed5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.135 [INFO][3841] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.398 [INFO][3888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" HandleID="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.440 [INFO][3888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" HandleID="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d780), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-2zttm.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-sw62j", "timestamp":"2025-02-14 00:52:21.398023845 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 
00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.440 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.695 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.697 [INFO][3888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.708 [INFO][3888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.733 [INFO][3888] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.764 [INFO][3888] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.770 [INFO][3888] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.778 [INFO][3888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.778 [INFO][3888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.788 [INFO][3888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69 Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.806 [INFO][3888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.829 [INFO][3888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.67/26] block=192.168.96.64/26 handle="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.830 [INFO][3888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.67/26] handle="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.830 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:52:21.944189 containerd[1509]: 2025-02-14 00:52:21.830 [INFO][3888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.67/26] IPv6=[] ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" HandleID="k8s-pod-network.7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.851 [INFO][3841] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-sw62j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5247ed5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.852 [INFO][3841] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.67/32] ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.852 [INFO][3841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib65d5247ed5 ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.876 [INFO][3841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" 
WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.885 [INFO][3841] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69", Pod:"coredns-6f6b679f8f-sw62j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5247ed5", MAC:"72:18:13:f1:60:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:21.947708 containerd[1509]: 2025-02-14 00:52:21.931 [INFO][3841] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw62j" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:22.012272 systemd-networkd[1424]: cali348b7fd940b: Link UP Feb 14 00:52:22.024792 systemd-networkd[1424]: cali348b7fd940b: Gained carrier Feb 14 00:52:22.038967 systemd[1]: Started cri-containerd-6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196.scope - libcontainer container 6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196. 
Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.131 [INFO][3849] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.197 [INFO][3849] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0 calico-kube-controllers-98789948- calico-system 5a23e5d8-3212-4b6f-b57a-e861b760ed5a 772 0 2025-02-14 00:51:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:98789948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com calico-kube-controllers-98789948-gtrxm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali348b7fd940b [] []}} ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.200 [INFO][3849] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.422 [INFO][3898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" HandleID="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.449 [INFO][3898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" HandleID="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002939e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2zttm.gb1.brightbox.com", "pod":"calico-kube-controllers-98789948-gtrxm", "timestamp":"2025-02-14 00:52:21.422937311 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.449 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.831 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.831 [INFO][3898] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.841 [INFO][3898] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.862 [INFO][3898] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.891 [INFO][3898] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.903 [INFO][3898] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.916 [INFO][3898] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.916 [INFO][3898] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.924 [INFO][3898] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01 Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.946 [INFO][3898] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.960 [INFO][3898] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.68/26] block=192.168.96.64/26 handle="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.960 [INFO][3898] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.68/26] handle="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.960 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:52:22.107307 containerd[1509]: 2025-02-14 00:52:21.960 [INFO][3898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.68/26] IPv6=[] ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" HandleID="k8s-pod-network.e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.108371 containerd[1509]: 2025-02-14 00:52:21.983 [INFO][3849] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0", GenerateName:"calico-kube-controllers-98789948-", Namespace:"calico-system", SelfLink:"", UID:"5a23e5d8-3212-4b6f-b57a-e861b760ed5a", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98789948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-98789948-gtrxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348b7fd940b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:22.108371 containerd[1509]: 2025-02-14 00:52:21.983 [INFO][3849] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.68/32] ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.108371 containerd[1509]: 2025-02-14 00:52:21.983 [INFO][3849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali348b7fd940b ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.108371 containerd[1509]: 2025-02-14 00:52:22.037 [INFO][3849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.108371 containerd[1509]: 
2025-02-14 00:52:22.050 [INFO][3849] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0", GenerateName:"calico-kube-controllers-98789948-", Namespace:"calico-system", SelfLink:"", UID:"5a23e5d8-3212-4b6f-b57a-e861b760ed5a", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98789948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01", Pod:"calico-kube-controllers-98789948-gtrxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348b7fd940b", MAC:"fe:10:3b:0c:7f:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:22.108371 containerd[1509]: 2025-02-14 00:52:22.097 [INFO][3849] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01" Namespace="calico-system" Pod="calico-kube-controllers-98789948-gtrxm" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:22.127235 containerd[1509]: time="2025-02-14T00:52:22.125156884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:22.127345 containerd[1509]: time="2025-02-14T00:52:22.125279331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:22.127345 containerd[1509]: time="2025-02-14T00:52:22.125321559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:22.134940 containerd[1509]: time="2025-02-14T00:52:22.134852411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rfjl,Uid:63e6501f-0b84-4dd0-abbf-bfa62e42e8b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656\"" Feb 14 00:52:22.140156 containerd[1509]: time="2025-02-14T00:52:22.139590196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 14 00:52:22.144323 containerd[1509]: time="2025-02-14T00:52:22.143799319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:22.222191 containerd[1509]: time="2025-02-14T00:52:22.222127876Z" level=info msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" Feb 14 00:52:22.317266 containerd[1509]: time="2025-02-14T00:52:22.305269327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:22.317266 containerd[1509]: time="2025-02-14T00:52:22.305458128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:22.317266 containerd[1509]: time="2025-02-14T00:52:22.305480035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:22.317266 containerd[1509]: time="2025-02-14T00:52:22.306709997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:22.319879 systemd[1]: Started cri-containerd-7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69.scope - libcontainer container 7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69. Feb 14 00:52:22.400884 systemd[1]: Started cri-containerd-e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01.scope - libcontainer container e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01. 
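Each pod in this capture follows the same CNI ADD pattern: an IPAM request, the host-wide IPAM lock, a claim out of the 192.168.96.64/26 affinity block, and finally a "Calico CNI using IPs" entry tying the address to the pod. To tabulate those assignments from a dump like this one, scanning for that entry is enough. The stdlib-only sketch below is illustrative; its regular expression is fitted to the entries in this capture, not to any documented log format.

```go
// Illustrative sketch only: tabulate pod IP assignments from a journal dump
// like this one by scanning for the CNI plugin's "Calico CNI using IPs"
// entries. The pattern is an assumption fitted to this capture.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var usingIPs = regexp.MustCompile(
	`Calico CNI using IPs: \[([0-9./]+)\].*?Namespace="([^"]+)" Pod="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Journal lines in this capture run to several kilobytes; raise the
	// scanner's default 64 KiB token limit so long entries don't abort the scan.
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		// A single journal line here can concatenate several entries,
		// so collect every match on the line, not just the first.
		for _, m := range usingIPs.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("%s/%s -> %s\n", m[2], m[3], m[1])
		}
	}
}
```

Fed this section on stdin, it would print lines such as `kube-system/coredns-6f6b679f8f-sw62j -> 192.168.96.67/32`, matching the "using IPs" entries visible above.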
Feb 14 00:52:22.539968 containerd[1509]: time="2025-02-14T00:52:22.539506214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-g66xr,Uid:c282c195-46d8-48e7-ac08-deb4910a1446,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196\"" Feb 14 00:52:22.546581 containerd[1509]: time="2025-02-14T00:52:22.545538267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw62j,Uid:f57ee0d6-5cc3-4e98-9d2a-d8690b89184b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69\"" Feb 14 00:52:22.573160 containerd[1509]: time="2025-02-14T00:52:22.572882638Z" level=info msg="CreateContainer within sandbox \"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:52:22.613399 systemd-networkd[1424]: cali036b59d302a: Gained IPv6LL Feb 14 00:52:22.670647 containerd[1509]: time="2025-02-14T00:52:22.669984885Z" level=info msg="CreateContainer within sandbox \"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0766bd4e2b95799ea2e0c1b9429bd351a857fda0d749c8cc9fac581aa24e1cb\"" Feb 14 00:52:22.673826 containerd[1509]: time="2025-02-14T00:52:22.673593973Z" level=info msg="StartContainer for \"e0766bd4e2b95799ea2e0c1b9429bd351a857fda0d749c8cc9fac581aa24e1cb\"" Feb 14 00:52:22.746632 systemd[1]: Started cri-containerd-e0766bd4e2b95799ea2e0c1b9429bd351a857fda0d749c8cc9fac581aa24e1cb.scope - libcontainer container e0766bd4e2b95799ea2e0c1b9429bd351a857fda0d749c8cc9fac581aa24e1cb. Feb 14 00:52:22.876277 containerd[1509]: time="2025-02-14T00:52:22.876193317Z" level=info msg="StartContainer for \"e0766bd4e2b95799ea2e0c1b9429bd351a857fda0d749c8cc9fac581aa24e1cb\" returns successfully" Feb 14 00:52:22.896033 containerd[1509]: time="2025-02-14T00:52:22.895742417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98789948-gtrxm,Uid:5a23e5d8-3212-4b6f-b57a-e861b760ed5a,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01\"" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.624 [INFO][4182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.624 [INFO][4182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" iface="eth0" netns="/var/run/netns/cni-ac27d805-0dab-eca2-f367-4fc47aa36d15" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.625 [INFO][4182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" iface="eth0" netns="/var/run/netns/cni-ac27d805-0dab-eca2-f367-4fc47aa36d15" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.627 [INFO][4182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" iface="eth0" netns="/var/run/netns/cni-ac27d805-0dab-eca2-f367-4fc47aa36d15" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.628 [INFO][4182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.628 [INFO][4182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.915 [INFO][4238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.917 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:22.917 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:23.001 [WARNING][4238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:23.001 [INFO][4238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:23.005 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:23.012298 containerd[1509]: 2025-02-14 00:52:23.008 [INFO][4182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:23.014064 containerd[1509]: time="2025-02-14T00:52:23.013077656Z" level=info msg="TearDown network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" successfully" Feb 14 00:52:23.014064 containerd[1509]: time="2025-02-14T00:52:23.013155529Z" level=info msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" returns successfully" Feb 14 00:52:23.016739 containerd[1509]: time="2025-02-14T00:52:23.016225077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rrrv,Uid:421f9c58-7a2f-4ef5-8284-a6c9420a9ad4,Namespace:kube-system,Attempt:1,}" Feb 14 00:52:23.188767 systemd-networkd[1424]: calib65d5247ed5: Gained IPv6LL Feb 14 00:52:23.221157 containerd[1509]: time="2025-02-14T00:52:23.220972786Z" level=info msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" Feb 14 00:52:23.270530 systemd[1]: run-netns-cni\x2dac27d805\x2d0dab\x2deca2\x2df367\x2d4fc47aa36d15.mount: Deactivated successfully. 
Feb 14 00:52:23.414837 systemd-networkd[1424]: cali48250474692: Link UP Feb 14 00:52:23.418455 systemd-networkd[1424]: cali48250474692: Gained carrier Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.111 [INFO][4326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.153 [INFO][4326] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0 coredns-6f6b679f8f- kube-system 421f9c58-7a2f-4ef5-8284-a6c9420a9ad4 805 0 2025-02-14 00:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com coredns-6f6b679f8f-5rrrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48250474692 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.154 [INFO][4326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.266 [INFO][4342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" HandleID="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.312 [INFO][4342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" HandleID="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003189e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-2zttm.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-5rrrv", "timestamp":"2025-02-14 00:52:23.266337051 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.312 [INFO][4342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.312 [INFO][4342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.312 [INFO][4342] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.319 [INFO][4342] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.327 [INFO][4342] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.339 [INFO][4342] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.345 [INFO][4342] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.349 [INFO][4342] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.350 [INFO][4342] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.354 [INFO][4342] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8 Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.371 [INFO][4342] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.391 [INFO][4342] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.69/26] block=192.168.96.64/26 handle="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.391 [INFO][4342] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.69/26] handle="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.391 [INFO][4342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:52:23.465093 containerd[1509]: 2025-02-14 00:52:23.391 [INFO][4342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.69/26] IPv6=[] ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" HandleID="k8s-pod-network.9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.396 [INFO][4326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-5rrrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48250474692", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.396 [INFO][4326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.69/32] ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.396 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48250474692 ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.422 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" 
WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.423 [INFO][4326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8", Pod:"coredns-6f6b679f8f-5rrrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48250474692", MAC:"c6:a6:72:ff:13:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:23.467680 containerd[1509]: 2025-02-14 00:52:23.452 [INFO][4326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5rrrv" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:23.571734 systemd-networkd[1424]: calid6494ae946f: Gained IPv6LL Feb 14 00:52:23.627722 containerd[1509]: time="2025-02-14T00:52:23.627457726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:23.629080 containerd[1509]: time="2025-02-14T00:52:23.628439884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:23.629080 containerd[1509]: time="2025-02-14T00:52:23.628956839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:23.635428 containerd[1509]: time="2025-02-14T00:52:23.635188800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:23.712931 kubelet[2641]: I0214 00:52:23.712259 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sw62j" podStartSLOduration=38.712209381 podStartE2EDuration="38.712209381s" podCreationTimestamp="2025-02-14 00:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:52:23.690640782 +0000 UTC m=+42.680807687" watchObservedRunningTime="2025-02-14 00:52:23.712209381 +0000 UTC m=+42.702376269" Feb 14 00:52:23.766893 systemd[1]: Started cri-containerd-9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8.scope - libcontainer container 9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8. Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.458 [INFO][4361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.460 [INFO][4361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" iface="eth0" netns="/var/run/netns/cni-ec33924b-9a81-29ad-ccb1-9d2f11773488" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.461 [INFO][4361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" iface="eth0" netns="/var/run/netns/cni-ec33924b-9a81-29ad-ccb1-9d2f11773488" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.462 [INFO][4361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" iface="eth0" netns="/var/run/netns/cni-ec33924b-9a81-29ad-ccb1-9d2f11773488" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.462 [INFO][4361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.463 [INFO][4361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.671 [INFO][4372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.671 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.674 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.714 [WARNING][4372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.714 [INFO][4372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.721 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:23.770587 containerd[1509]: 2025-02-14 00:52:23.752 [INFO][4361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:23.774082 containerd[1509]: time="2025-02-14T00:52:23.773912280Z" level=info msg="TearDown network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" successfully" Feb 14 00:52:23.774082 containerd[1509]: time="2025-02-14T00:52:23.773991925Z" level=info msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" returns successfully" Feb 14 00:52:23.775803 containerd[1509]: time="2025-02-14T00:52:23.775316859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-nz6x9,Uid:899cb3f5-312b-45b1-b391-beaf3be22e8b,Namespace:calico-apiserver,Attempt:1,}" Feb 14 00:52:23.784109 systemd[1]: run-netns-cni\x2dec33924b\x2d9a81\x2d29ad\x2dccb1\x2d9d2f11773488.mount: Deactivated successfully. 
Feb 14 00:52:23.806141 sshd[4395]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:23.826879 systemd-networkd[1424]: cali348b7fd940b: Gained IPv6LL Feb 14 00:52:24.048427 containerd[1509]: time="2025-02-14T00:52:24.048111726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rrrv,Uid:421f9c58-7a2f-4ef5-8284-a6c9420a9ad4,Namespace:kube-system,Attempt:1,} returns sandbox id \"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8\"" Feb 14 00:52:24.060406 containerd[1509]: time="2025-02-14T00:52:24.059694715Z" level=info msg="CreateContainer within sandbox \"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:52:24.128377 containerd[1509]: time="2025-02-14T00:52:24.128220510Z" level=info msg="CreateContainer within sandbox \"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55758f92e7f12fc44c1181e41b31cf591b7a4be49a2890596176a31c5bbcd4cc\"" Feb 14 00:52:24.137158 containerd[1509]: time="2025-02-14T00:52:24.132929505Z" level=info msg="StartContainer for \"55758f92e7f12fc44c1181e41b31cf591b7a4be49a2890596176a31c5bbcd4cc\"" Feb 14 00:52:24.241808 systemd-networkd[1424]: cali312789ec643: Link UP Feb 14 00:52:24.246570 systemd-networkd[1424]: cali312789ec643: Gained carrier Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:23.874 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:23.918 [INFO][4426] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0 calico-apiserver-77fc7c7db4- calico-apiserver 899cb3f5-312b-45b1-b391-beaf3be22e8b 815 0 2025-02-14 00:51:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fc7c7db4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-2zttm.gb1.brightbox.com calico-apiserver-77fc7c7db4-nz6x9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali312789ec643 [] []}} ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:23.919 [INFO][4426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:23.994 [INFO][4439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" HandleID="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.026 [INFO][4439] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" HandleID="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042dd00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-2zttm.gb1.brightbox.com", "pod":"calico-apiserver-77fc7c7db4-nz6x9", "timestamp":"2025-02-14 00:52:23.994221571 +0000 UTC"}, Hostname:"srv-2zttm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.027 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.027 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.027 [INFO][4439] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2zttm.gb1.brightbox.com' Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.041 [INFO][4439] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.116 [INFO][4439] ipam/ipam.go 372: Looking up existing affinities for host host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.155 [INFO][4439] ipam/ipam.go 489: Trying affinity for 192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.170 [INFO][4439] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.175 [INFO][4439] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.175 [INFO][4439] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.179 [INFO][4439] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.201 [INFO][4439] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.222 [INFO][4439] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.70/26] block=192.168.96.64/26 handle="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" host="srv-2zttm.gb1.brightbox.com" Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.222 [INFO][4439] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.70/26] handle="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" host="srv-2zttm.gb1.brightbox.com" Feb 14 
00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.223 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:24.279129 containerd[1509]: 2025-02-14 00:52:24.223 [INFO][4439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.70/26] IPv6=[] ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" HandleID="k8s-pod-network.548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.233 [INFO][4426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"899cb3f5-312b-45b1-b391-beaf3be22e8b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77fc7c7db4-nz6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali312789ec643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.234 [INFO][4426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.70/32] ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.234 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali312789ec643 ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.247 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" 
WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.248 [INFO][4426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"899cb3f5-312b-45b1-b391-beaf3be22e8b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e", Pod:"calico-apiserver-77fc7c7db4-nz6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali312789ec643", MAC:"7e:53:4f:57:c6:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:24.280369 containerd[1509]: 2025-02-14 00:52:24.263 [INFO][4426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e" Namespace="calico-apiserver" Pod="calico-apiserver-77fc7c7db4-nz6x9" WorkloadEndpoint="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:24.328746 systemd[1]: Started cri-containerd-55758f92e7f12fc44c1181e41b31cf591b7a4be49a2890596176a31c5bbcd4cc.scope - libcontainer container 55758f92e7f12fc44c1181e41b31cf591b7a4be49a2890596176a31c5bbcd4cc. Feb 14 00:52:24.460276 containerd[1509]: time="2025-02-14T00:52:24.459210273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:52:24.460276 containerd[1509]: time="2025-02-14T00:52:24.459346927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:52:24.460276 containerd[1509]: time="2025-02-14T00:52:24.459372680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:24.460276 containerd[1509]: time="2025-02-14T00:52:24.459576293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:52:24.507447 containerd[1509]: time="2025-02-14T00:52:24.506916373Z" level=info msg="StartContainer for \"55758f92e7f12fc44c1181e41b31cf591b7a4be49a2890596176a31c5bbcd4cc\" returns successfully" Feb 14 00:52:24.592673 systemd[1]: Started cri-containerd-548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e.scope - libcontainer container 548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e. Feb 14 00:52:24.703373 kubelet[2641]: I0214 00:52:24.703292 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5rrrv" podStartSLOduration=39.703266034 podStartE2EDuration="39.703266034s" podCreationTimestamp="2025-02-14 00:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:52:24.702998823 +0000 UTC m=+43.693165727" watchObservedRunningTime="2025-02-14 00:52:24.703266034 +0000 UTC m=+43.693432922" Feb 14 00:52:24.832068 containerd[1509]: time="2025-02-14T00:52:24.831874700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:24.834449 containerd[1509]: time="2025-02-14T00:52:24.834353948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 14 00:52:24.836195 containerd[1509]: time="2025-02-14T00:52:24.836127826Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:24.842225 containerd[1509]: time="2025-02-14T00:52:24.842137074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:24.844868 containerd[1509]: time="2025-02-14T00:52:24.844095154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.704452855s" Feb 14 00:52:24.844868 containerd[1509]: time="2025-02-14T00:52:24.844187317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 14 00:52:24.847675 containerd[1509]: time="2025-02-14T00:52:24.847636617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 00:52:24.852794 containerd[1509]: time="2025-02-14T00:52:24.852540007Z" level=info msg="CreateContainer within sandbox \"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 14 00:52:24.888106 containerd[1509]: time="2025-02-14T00:52:24.888044799Z" level=info msg="CreateContainer within sandbox \"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"539540c16d614e7a3ab579a07dd379a6c9633afe9b6e1fc88fed69c756caaf13\"" Feb 14 00:52:24.892592 containerd[1509]: time="2025-02-14T00:52:24.890045960Z" level=info msg="StartContainer for 
\"539540c16d614e7a3ab579a07dd379a6c9633afe9b6e1fc88fed69c756caaf13\"" Feb 14 00:52:24.956532 kernel: bpftool[4580]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 14 00:52:24.982696 containerd[1509]: time="2025-02-14T00:52:24.982601018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fc7c7db4-nz6x9,Uid:899cb3f5-312b-45b1-b391-beaf3be22e8b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e\"" Feb 14 00:52:24.997674 systemd[1]: Started cri-containerd-539540c16d614e7a3ab579a07dd379a6c9633afe9b6e1fc88fed69c756caaf13.scope - libcontainer container 539540c16d614e7a3ab579a07dd379a6c9633afe9b6e1fc88fed69c756caaf13. Feb 14 00:52:25.111788 containerd[1509]: time="2025-02-14T00:52:25.111120150Z" level=info msg="StartContainer for \"539540c16d614e7a3ab579a07dd379a6c9633afe9b6e1fc88fed69c756caaf13\" returns successfully" Feb 14 00:52:25.298577 systemd-networkd[1424]: cali48250474692: Gained IPv6LL Feb 14 00:52:25.464680 systemd-networkd[1424]: vxlan.calico: Link UP Feb 14 00:52:25.464692 systemd-networkd[1424]: vxlan.calico: Gained carrier Feb 14 00:52:25.683523 systemd-networkd[1424]: cali312789ec643: Gained IPv6LL Feb 14 00:52:25.899370 sshd[3937]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:26.336968 sshd[4703]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:27.218668 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Feb 14 00:52:28.173965 sshd[3937]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:28.610584 sshd[4711]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:28.612212 containerd[1509]: time="2025-02-14T00:52:28.611596001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:28.613227 containerd[1509]: time="2025-02-14T00:52:28.613159493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 14 00:52:28.614008 containerd[1509]: time="2025-02-14T00:52:28.613738973Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:28.617731 containerd[1509]: time="2025-02-14T00:52:28.617209619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:28.620098 containerd[1509]: time="2025-02-14T00:52:28.620044712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.772358244s" Feb 14 00:52:28.620254 containerd[1509]: time="2025-02-14T00:52:28.620224497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 14 00:52:28.621862 containerd[1509]: time="2025-02-14T00:52:28.621832902Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 14 00:52:28.626931 containerd[1509]: time="2025-02-14T00:52:28.626575335Z" level=info msg="CreateContainer within sandbox \"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 00:52:28.644716 containerd[1509]: time="2025-02-14T00:52:28.644676066Z" level=info msg="CreateContainer within sandbox \"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8cf06b6e761dd9e5fdde8f794730883305faf0565d3ae37be697dd3c4482e08d\"" Feb 14 00:52:28.646049 containerd[1509]: time="2025-02-14T00:52:28.645977648Z" level=info msg="StartContainer for \"8cf06b6e761dd9e5fdde8f794730883305faf0565d3ae37be697dd3c4482e08d\"" Feb 14 00:52:28.698633 systemd[1]: Started cri-containerd-8cf06b6e761dd9e5fdde8f794730883305faf0565d3ae37be697dd3c4482e08d.scope - libcontainer container 8cf06b6e761dd9e5fdde8f794730883305faf0565d3ae37be697dd3c4482e08d. Feb 14 00:52:28.771474 containerd[1509]: time="2025-02-14T00:52:28.771369181Z" level=info msg="StartContainer for \"8cf06b6e761dd9e5fdde8f794730883305faf0565d3ae37be697dd3c4482e08d\" returns successfully" Feb 14 00:52:29.734207 kubelet[2641]: I0214 00:52:29.734115 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77fc7c7db4-g66xr" podStartSLOduration=31.660016361 podStartE2EDuration="37.734077932s" podCreationTimestamp="2025-02-14 00:51:52 +0000 UTC" firstStartedPulling="2025-02-14 00:52:22.547243558 +0000 UTC m=+41.537410437" lastFinishedPulling="2025-02-14 00:52:28.621305107 +0000 UTC m=+47.611472008" observedRunningTime="2025-02-14 00:52:29.733874065 +0000 UTC m=+48.724040968" watchObservedRunningTime="2025-02-14 00:52:29.734077932 +0000 UTC m=+48.724244819" Feb 14 00:52:30.724196 sshd[3937]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:30.941583 sshd[3937]: Received disconnect from 218.92.0.226 port 30252:11: [preauth] Feb 14 00:52:30.941583 sshd[3937]: Disconnected from authenticating user root 218.92.0.226 port 30252 [preauth] Feb 14 00:52:30.945553 systemd[1]: sshd@7-10.230.17.110:22-218.92.0.226:30252.service: Deactivated successfully. Feb 14 00:52:31.213438 systemd[1]: Started sshd@8-10.230.17.110:22-218.92.0.226:17206.service - OpenSSH per-connection server daemon (218.92.0.226:17206). 
Feb 14 00:52:31.846004 containerd[1509]: time="2025-02-14T00:52:31.845941392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:31.847272 containerd[1509]: time="2025-02-14T00:52:31.847024277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 14 00:52:31.848336 containerd[1509]: time="2025-02-14T00:52:31.848116236Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:31.856158 containerd[1509]: time="2025-02-14T00:52:31.855488594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:31.856549 containerd[1509]: time="2025-02-14T00:52:31.856418513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.233959274s" Feb 14 00:52:31.856775 containerd[1509]: time="2025-02-14T00:52:31.856655898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 14 00:52:31.859000 containerd[1509]: time="2025-02-14T00:52:31.858872584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 00:52:31.880334 containerd[1509]: time="2025-02-14T00:52:31.880254023Z" level=info msg="CreateContainer within sandbox \"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 00:52:31.903409 containerd[1509]: time="2025-02-14T00:52:31.899929089Z" level=info msg="CreateContainer within sandbox \"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9\"" Feb 14 00:52:31.904931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834303310.mount: Deactivated successfully. Feb 14 00:52:31.909373 containerd[1509]: time="2025-02-14T00:52:31.909281769Z" level=info msg="StartContainer for \"4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9\"" Feb 14 00:52:31.964524 systemd[1]: Started cri-containerd-4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9.scope - libcontainer container 4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9. 
Feb 14 00:52:32.035260 containerd[1509]: time="2025-02-14T00:52:32.035192217Z" level=info msg="StartContainer for \"4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9\" returns successfully" Feb 14 00:52:32.262691 containerd[1509]: time="2025-02-14T00:52:32.256836270Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:32.262691 containerd[1509]: time="2025-02-14T00:52:32.258232824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 14 00:52:32.264592 containerd[1509]: time="2025-02-14T00:52:32.264521789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 404.119145ms" Feb 14 00:52:32.264882 containerd[1509]: time="2025-02-14T00:52:32.264728066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 14 00:52:32.266842 containerd[1509]: time="2025-02-14T00:52:32.266222283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 14 00:52:32.269779 containerd[1509]: time="2025-02-14T00:52:32.268899606Z" level=info msg="CreateContainer within sandbox \"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 00:52:32.282163 containerd[1509]: time="2025-02-14T00:52:32.282108673Z" level=info msg="CreateContainer within sandbox \"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6b76088520eeabc24d3ab64a8b88bf6a30048dd4849ca8d69e1392955c6d5f7d\"" Feb 14 00:52:32.283113 containerd[1509]: time="2025-02-14T00:52:32.283084069Z" level=info msg="StartContainer for \"6b76088520eeabc24d3ab64a8b88bf6a30048dd4849ca8d69e1392955c6d5f7d\"" Feb 14 00:52:32.329618 systemd[1]: Started cri-containerd-6b76088520eeabc24d3ab64a8b88bf6a30048dd4849ca8d69e1392955c6d5f7d.scope - libcontainer container 6b76088520eeabc24d3ab64a8b88bf6a30048dd4849ca8d69e1392955c6d5f7d. 
Feb 14 00:52:32.400455 containerd[1509]: time="2025-02-14T00:52:32.400376756Z" level=info msg="StartContainer for \"6b76088520eeabc24d3ab64a8b88bf6a30048dd4849ca8d69e1392955c6d5f7d\" returns successfully" Feb 14 00:52:32.833938 kubelet[2641]: I0214 00:52:32.833799 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77fc7c7db4-nz6x9" podStartSLOduration=33.557792665 podStartE2EDuration="40.833752487s" podCreationTimestamp="2025-02-14 00:51:52 +0000 UTC" firstStartedPulling="2025-02-14 00:52:24.989938339 +0000 UTC m=+43.980105216" lastFinishedPulling="2025-02-14 00:52:32.265898158 +0000 UTC m=+51.256065038" observedRunningTime="2025-02-14 00:52:32.81666421 +0000 UTC m=+51.806831104" watchObservedRunningTime="2025-02-14 00:52:32.833752487 +0000 UTC m=+51.823919384" Feb 14 00:52:32.835562 kubelet[2641]: I0214 00:52:32.835167 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-98789948-gtrxm" podStartSLOduration=30.878401184 podStartE2EDuration="39.835028852s" podCreationTimestamp="2025-02-14 00:51:53 +0000 UTC" firstStartedPulling="2025-02-14 00:52:22.901702038 +0000 UTC m=+41.891868943" lastFinishedPulling="2025-02-14 00:52:31.858329731 +0000 UTC m=+50.848496611" observedRunningTime="2025-02-14 00:52:32.83172092 +0000 UTC m=+51.821887830" watchObservedRunningTime="2025-02-14 00:52:32.835028852 +0000 UTC m=+51.825195735" Feb 14 00:52:32.941561 sshd[4858]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:33.759032 kubelet[2641]: I0214 00:52:33.758786 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:52:34.419113 containerd[1509]: time="2025-02-14T00:52:34.418952206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:34.426203 containerd[1509]: time="2025-02-14T00:52:34.424976543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 14 00:52:34.428367 containerd[1509]: time="2025-02-14T00:52:34.428291354Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:34.442046 containerd[1509]: time="2025-02-14T00:52:34.441934188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.175664221s" Feb 14 00:52:34.442046 containerd[1509]: time="2025-02-14T00:52:34.442013091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 14 00:52:34.442919 containerd[1509]: time="2025-02-14T00:52:34.442201229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:52:34.446688 containerd[1509]: 
time="2025-02-14T00:52:34.446599117Z" level=info msg="CreateContainer within sandbox \"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 14 00:52:34.483921 containerd[1509]: time="2025-02-14T00:52:34.483725474Z" level=info msg="CreateContainer within sandbox \"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9d8283e78152f401549381b3069aaba9124dbd489b7e5624c05ed0b59914dc29\"" Feb 14 00:52:34.485641 containerd[1509]: time="2025-02-14T00:52:34.485494373Z" level=info msg="StartContainer for \"9d8283e78152f401549381b3069aaba9124dbd489b7e5624c05ed0b59914dc29\"" Feb 14 00:52:34.549862 systemd[1]: Started cri-containerd-9d8283e78152f401549381b3069aaba9124dbd489b7e5624c05ed0b59914dc29.scope - libcontainer container 9d8283e78152f401549381b3069aaba9124dbd489b7e5624c05ed0b59914dc29. Feb 14 00:52:34.595044 containerd[1509]: time="2025-02-14T00:52:34.594257634Z" level=info msg="StartContainer for \"9d8283e78152f401549381b3069aaba9124dbd489b7e5624c05ed0b59914dc29\" returns successfully" Feb 14 00:52:34.603177 sshd[4773]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:34.780236 kubelet[2641]: I0214 00:52:34.780000 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6rfjl" podStartSLOduration=29.473718205 podStartE2EDuration="41.779973483s" podCreationTimestamp="2025-02-14 00:51:53 +0000 UTC" firstStartedPulling="2025-02-14 00:52:22.138316129 +0000 UTC m=+41.128483003" lastFinishedPulling="2025-02-14 00:52:34.444571406 +0000 UTC m=+53.434738281" observedRunningTime="2025-02-14 00:52:34.777175709 +0000 UTC m=+53.767342614" watchObservedRunningTime="2025-02-14 00:52:34.779973483 +0000 UTC m=+53.770140365" Feb 14 00:52:35.063884 sshd[4941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:35.521258 kubelet[2641]: I0214 00:52:35.521053 2641 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 14 00:52:35.522226 kubelet[2641]: I0214 00:52:35.522162 2641 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 14 00:52:36.664993 sshd[4773]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:37.125850 sshd[4950]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:39.003416 sshd[4773]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:39.233475 sshd[4773]: Received disconnect from 218.92.0.226 port 17206:11: [preauth] Feb 14 00:52:39.235978 sshd[4773]: Disconnected from authenticating user root 218.92.0.226 port 17206 [preauth] Feb 14 00:52:39.238035 systemd[1]: sshd@8-10.230.17.110:22-218.92.0.226:17206.service: Deactivated successfully. Feb 14 00:52:39.429795 systemd[1]: Started sshd@9-10.230.17.110:22-218.92.0.226:17210.service - OpenSSH per-connection server daemon (218.92.0.226:17210). 
Feb 14 00:52:40.858866 sshd[4959]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:41.258478 containerd[1509]: time="2025-02-14T00:52:41.258176592Z" level=info msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.428 [WARNING][4974] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c282c195-46d8-48e7-ac08-deb4910a1446", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196", Pod:"calico-apiserver-77fc7c7db4-g66xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6494ae946f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.431 [INFO][4974] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.431 [INFO][4974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" iface="eth0" netns="" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.431 [INFO][4974] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.431 [INFO][4974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.471 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.471 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.471 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.481 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.481 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.483 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:41.489037 containerd[1509]: 2025-02-14 00:52:41.486 [INFO][4974] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.491645 containerd[1509]: time="2025-02-14T00:52:41.489075226Z" level=info msg="TearDown network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" successfully" Feb 14 00:52:41.491645 containerd[1509]: time="2025-02-14T00:52:41.489114388Z" level=info msg="StopPodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" returns successfully" Feb 14 00:52:41.529738 containerd[1509]: time="2025-02-14T00:52:41.529507199Z" level=info msg="RemovePodSandbox for \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" Feb 14 00:52:41.529738 containerd[1509]: time="2025-02-14T00:52:41.529593876Z" level=info msg="Forcibly stopping sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\"" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.591 [WARNING][4999] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c282c195-46d8-48e7-ac08-deb4910a1446", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"6206f7be75c75d9a1602deaf5db09b55b640db511be61aff05d18f3b50fb1196", Pod:"calico-apiserver-77fc7c7db4-g66xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6494ae946f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.591 [INFO][4999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.591 [INFO][4999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" iface="eth0" netns="" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.591 [INFO][4999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.591 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.631 [INFO][5005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.631 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.632 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.640 [WARNING][5005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.640 [INFO][5005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" HandleID="k8s-pod-network.69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--g66xr-eth0" Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.642 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:41.647414 containerd[1509]: 2025-02-14 00:52:41.644 [INFO][4999] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c" Feb 14 00:52:41.649011 containerd[1509]: time="2025-02-14T00:52:41.647490057Z" level=info msg="TearDown network for sandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" successfully" Feb 14 00:52:41.651588 containerd[1509]: time="2025-02-14T00:52:41.651519439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:41.660826 containerd[1509]: time="2025-02-14T00:52:41.660776489Z" level=info msg="RemovePodSandbox \"69773d1ce5cc62c3a624b3152d365ec1cc8400e6ee5a1e2fe8cff9fb750b148c\" returns successfully" Feb 14 00:52:41.661920 containerd[1509]: time="2025-02-14T00:52:41.661887084Z" level=info msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.720 [WARNING][5023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8", Pod:"coredns-6f6b679f8f-5rrrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48250474692", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.721 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.721 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" iface="eth0" netns="" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.721 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.721 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.760 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.761 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.761 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.770 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.770 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.772 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:41.777751 containerd[1509]: 2025-02-14 00:52:41.774 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.780513 containerd[1509]: time="2025-02-14T00:52:41.778041265Z" level=info msg="TearDown network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" successfully" Feb 14 00:52:41.780513 containerd[1509]: time="2025-02-14T00:52:41.778096229Z" level=info msg="StopPodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" returns successfully" Feb 14 00:52:41.780513 containerd[1509]: time="2025-02-14T00:52:41.779152077Z" level=info msg="RemovePodSandbox for \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" Feb 14 00:52:41.780513 containerd[1509]: time="2025-02-14T00:52:41.779201616Z" level=info msg="Forcibly stopping sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\"" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.844 [WARNING][5047] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"421f9c58-7a2f-4ef5-8284-a6c9420a9ad4", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"9afca6e2d2f779f8e573257356f4034abf4509daae936c6a89290a95f088d6d8", Pod:"coredns-6f6b679f8f-5rrrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48250474692", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.845 [INFO][5047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.845 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" iface="eth0" netns="" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.845 [INFO][5047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.845 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.896 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.896 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.896 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.907 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.907 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" HandleID="k8s-pod-network.2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5rrrv-eth0" Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.909 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:41.917447 containerd[1509]: 2025-02-14 00:52:41.914 [INFO][5047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb" Feb 14 00:52:41.917447 containerd[1509]: time="2025-02-14T00:52:41.917113413Z" level=info msg="TearDown network for sandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" successfully" Feb 14 00:52:41.928620 containerd[1509]: time="2025-02-14T00:52:41.928549469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:41.928738 containerd[1509]: time="2025-02-14T00:52:41.928697054Z" level=info msg="RemovePodSandbox \"2f2c544c4c1bdf01d134c243fe46e6b85c88f819dd45300d9cddd6bcd1fc6fdb\" returns successfully" Feb 14 00:52:41.929634 containerd[1509]: time="2025-02-14T00:52:41.929592380Z" level=info msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:41.987 [WARNING][5072] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0", GenerateName:"calico-kube-controllers-98789948-", Namespace:"calico-system", SelfLink:"", UID:"5a23e5d8-3212-4b6f-b57a-e861b760ed5a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98789948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01", Pod:"calico-kube-controllers-98789948-gtrxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348b7fd940b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:41.988 [INFO][5072] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:41.988 [INFO][5072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" iface="eth0" netns="" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:41.988 [INFO][5072] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:41.988 [INFO][5072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.022 [INFO][5078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.022 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.022 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.031 [WARNING][5078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.032 [INFO][5078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.034 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.038483 containerd[1509]: 2025-02-14 00:52:42.036 [INFO][5072] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.038483 containerd[1509]: time="2025-02-14T00:52:42.038427019Z" level=info msg="TearDown network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" successfully" Feb 14 00:52:42.038483 containerd[1509]: time="2025-02-14T00:52:42.038467435Z" level=info msg="StopPodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" returns successfully" Feb 14 00:52:42.041075 containerd[1509]: time="2025-02-14T00:52:42.040052483Z" level=info msg="RemovePodSandbox for \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" Feb 14 00:52:42.041075 containerd[1509]: time="2025-02-14T00:52:42.040128994Z" level=info msg="Forcibly stopping sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\"" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.098 [WARNING][5096] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0", GenerateName:"calico-kube-controllers-98789948-", Namespace:"calico-system", SelfLink:"", UID:"5a23e5d8-3212-4b6f-b57a-e861b760ed5a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98789948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"e1954425c3411e2e5577b25a1b050f6fd178518590b091ede7820e69488edd01", Pod:"calico-kube-controllers-98789948-gtrxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348b7fd940b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.098 [INFO][5096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.098 [INFO][5096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" iface="eth0" netns="" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.098 [INFO][5096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.098 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.134 [INFO][5103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.135 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.135 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.145 [WARNING][5103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.147 [INFO][5103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" HandleID="k8s-pod-network.bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--kube--controllers--98789948--gtrxm-eth0" Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.152 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.157834 containerd[1509]: 2025-02-14 00:52:42.154 [INFO][5096] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7" Feb 14 00:52:42.157834 containerd[1509]: time="2025-02-14T00:52:42.157823366Z" level=info msg="TearDown network for sandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" successfully" Feb 14 00:52:42.161703 containerd[1509]: time="2025-02-14T00:52:42.161651972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:42.161931 containerd[1509]: time="2025-02-14T00:52:42.161747977Z" level=info msg="RemovePodSandbox \"bed54687f6581e5588d435b5ef263528f20410546dbec1e8cecec09be395fed7\" returns successfully" Feb 14 00:52:42.162696 containerd[1509]: time="2025-02-14T00:52:42.162637189Z" level=info msg="StopPodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.237 [WARNING][5121] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656", Pod:"csi-node-driver-6rfjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali036b59d302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.237 [INFO][5121] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.237 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" iface="eth0" netns="" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.237 [INFO][5121] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.237 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.292 [INFO][5127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.293 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.293 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.303 [WARNING][5127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.304 [INFO][5127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.306 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.311620 containerd[1509]: 2025-02-14 00:52:42.309 [INFO][5121] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.313936 containerd[1509]: time="2025-02-14T00:52:42.311737279Z" level=info msg="TearDown network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" successfully" Feb 14 00:52:42.313936 containerd[1509]: time="2025-02-14T00:52:42.311799466Z" level=info msg="StopPodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" returns successfully" Feb 14 00:52:42.313936 containerd[1509]: time="2025-02-14T00:52:42.312633983Z" level=info msg="RemovePodSandbox for \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" Feb 14 00:52:42.313936 containerd[1509]: time="2025-02-14T00:52:42.312675450Z" level=info msg="Forcibly stopping sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\"" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.374 [WARNING][5145] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63e6501f-0b84-4dd0-abbf-bfa62e42e8b0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"4c024be0c9b70fe67a5b3fe53bc5efe12ecdfc30af541703f6eaa2b076785656", Pod:"csi-node-driver-6rfjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali036b59d302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.374 [INFO][5145] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.374 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" iface="eth0" netns="" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.374 [INFO][5145] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.374 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.410 [INFO][5151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.410 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.410 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.421 [WARNING][5151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.422 [INFO][5151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" HandleID="k8s-pod-network.8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Workload="srv--2zttm.gb1.brightbox.com-k8s-csi--node--driver--6rfjl-eth0" Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.424 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.429795 containerd[1509]: 2025-02-14 00:52:42.427 [INFO][5145] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54" Feb 14 00:52:42.431296 containerd[1509]: time="2025-02-14T00:52:42.429859267Z" level=info msg="TearDown network for sandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" successfully" Feb 14 00:52:42.433519 containerd[1509]: time="2025-02-14T00:52:42.433471916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:42.433670 containerd[1509]: time="2025-02-14T00:52:42.433558697Z" level=info msg="RemovePodSandbox \"8a9f62895aa44246eb0bc1f3315459486959ef48d784bf2d9473854c4631cf54\" returns successfully" Feb 14 00:52:42.441240 containerd[1509]: time="2025-02-14T00:52:42.441206211Z" level=info msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.511 [WARNING][5169] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69", Pod:"coredns-6f6b679f8f-sw62j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5247ed5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.512 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.512 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" iface="eth0" netns="" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.512 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.512 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.547 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.547 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.547 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.557 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.557 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.559 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.564219 containerd[1509]: 2025-02-14 00:52:42.561 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.564219 containerd[1509]: time="2025-02-14T00:52:42.564174553Z" level=info msg="TearDown network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" successfully" Feb 14 00:52:42.564219 containerd[1509]: time="2025-02-14T00:52:42.564213822Z" level=info msg="StopPodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" returns successfully" Feb 14 00:52:42.566484 containerd[1509]: time="2025-02-14T00:52:42.565140184Z" level=info msg="RemovePodSandbox for \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" Feb 14 00:52:42.566484 containerd[1509]: time="2025-02-14T00:52:42.565192966Z" level=info msg="Forcibly stopping sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\"" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.617 [WARNING][5193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f57ee0d6-5cc3-4e98-9d2a-d8690b89184b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"7df5415bca5c56b00bc4b8bc3c276ed762a30f44d72fba70ed7842e5447bcb69", Pod:"coredns-6f6b679f8f-sw62j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5247ed5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.618 [INFO][5193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.618 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" iface="eth0" netns="" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.618 [INFO][5193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.618 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.651 [INFO][5200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.652 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.652 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.661 [WARNING][5200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.661 [INFO][5200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" HandleID="k8s-pod-network.2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Workload="srv--2zttm.gb1.brightbox.com-k8s-coredns--6f6b679f8f--sw62j-eth0" Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.663 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.667817 containerd[1509]: 2025-02-14 00:52:42.665 [INFO][5193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6" Feb 14 00:52:42.669557 containerd[1509]: time="2025-02-14T00:52:42.667843357Z" level=info msg="TearDown network for sandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" successfully" Feb 14 00:52:42.671476 containerd[1509]: time="2025-02-14T00:52:42.671435656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:42.671568 containerd[1509]: time="2025-02-14T00:52:42.671548402Z" level=info msg="RemovePodSandbox \"2b461d41f12fc9fdce23bffbf5897ddf3fdd6b8b584f4736d0e21870e6a158b6\" returns successfully" Feb 14 00:52:42.672347 containerd[1509]: time="2025-02-14T00:52:42.672242698Z" level=info msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.728 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"899cb3f5-312b-45b1-b391-beaf3be22e8b", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e", Pod:"calico-apiserver-77fc7c7db4-nz6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali312789ec643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.729 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.729 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" iface="eth0" netns="" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.729 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.729 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.768 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.768 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.770 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.779 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.779 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.781 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.789867 containerd[1509]: 2025-02-14 00:52:42.786 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.789867 containerd[1509]: time="2025-02-14T00:52:42.789788408Z" level=info msg="TearDown network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" successfully" Feb 14 00:52:42.789867 containerd[1509]: time="2025-02-14T00:52:42.789821816Z" level=info msg="StopPodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" returns successfully" Feb 14 00:52:42.791422 containerd[1509]: time="2025-02-14T00:52:42.791330223Z" level=info msg="RemovePodSandbox for \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" Feb 14 00:52:42.791422 containerd[1509]: time="2025-02-14T00:52:42.791368805Z" level=info msg="Forcibly stopping sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\"" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.854 [WARNING][5242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0", GenerateName:"calico-apiserver-77fc7c7db4-", Namespace:"calico-apiserver", SelfLink:"", UID:"899cb3f5-312b-45b1-b391-beaf3be22e8b", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fc7c7db4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2zttm.gb1.brightbox.com", ContainerID:"548e03b3046556a8c064017217c14f6bc0dca3fa8cf53b765530e330652cd99e", Pod:"calico-apiserver-77fc7c7db4-nz6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali312789ec643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.854 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.854 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" iface="eth0" netns="" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.854 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.854 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.890 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.890 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.890 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.900 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.900 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" HandleID="k8s-pod-network.e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Workload="srv--2zttm.gb1.brightbox.com-k8s-calico--apiserver--77fc7c7db4--nz6x9-eth0" Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.902 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:52:42.907513 containerd[1509]: 2025-02-14 00:52:42.904 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f" Feb 14 00:52:42.907513 containerd[1509]: time="2025-02-14T00:52:42.907462415Z" level=info msg="TearDown network for sandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" successfully" Feb 14 00:52:42.912421 containerd[1509]: time="2025-02-14T00:52:42.912339998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:52:42.912669 containerd[1509]: time="2025-02-14T00:52:42.912478697Z" level=info msg="RemovePodSandbox \"e945b46d239a9b62980a9af200e447543852944797d61d24cb9339a05ffaba1f\" returns successfully" Feb 14 00:52:42.960176 sshd[4954]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:43.353463 sshd[5254]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:45.189362 sshd[4954]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:45.577853 sshd[5256]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.226 user=root Feb 14 00:52:46.633530 kubelet[2641]: I0214 00:52:46.633356 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:52:46.844226 systemd[1]: Started sshd@10-10.230.17.110:22-147.75.109.163:50388.service - OpenSSH per-connection server daemon (147.75.109.163:50388). Feb 14 00:52:47.691724 sshd[4954]: PAM: Permission denied for root from 218.92.0.226 Feb 14 00:52:47.776525 sshd[5274]: Accepted publickey for core from 147.75.109.163 port 50388 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:52:47.781033 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:52:47.792521 systemd-logind[1486]: New session 10 of user core. Feb 14 00:52:47.798193 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 00:52:47.884729 sshd[4954]: Received disconnect from 218.92.0.226 port 17210:11: [preauth] Feb 14 00:52:47.884729 sshd[4954]: Disconnected from authenticating user root 218.92.0.226 port 17210 [preauth] Feb 14 00:52:47.887882 systemd[1]: sshd@9-10.230.17.110:22-218.92.0.226:17210.service: Deactivated successfully. Feb 14 00:52:48.990080 sshd[5274]: pam_unix(sshd:session): session closed for user core Feb 14 00:52:48.995882 systemd[1]: sshd@10-10.230.17.110:22-147.75.109.163:50388.service: Deactivated successfully. 
Feb 14 00:52:48.998519 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 00:52:49.000819 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Feb 14 00:52:49.003299 systemd-logind[1486]: Removed session 10. Feb 14 00:52:54.147857 systemd[1]: Started sshd@11-10.230.17.110:22-147.75.109.163:54796.service - OpenSSH per-connection server daemon (147.75.109.163:54796). Feb 14 00:52:55.099009 sshd[5316]: Accepted publickey for core from 147.75.109.163 port 54796 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:52:55.101088 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:52:55.115329 systemd-logind[1486]: New session 11 of user core. Feb 14 00:52:55.118610 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 14 00:52:55.940256 sshd[5316]: pam_unix(sshd:session): session closed for user core Feb 14 00:52:55.947221 systemd[1]: sshd@11-10.230.17.110:22-147.75.109.163:54796.service: Deactivated successfully. Feb 14 00:52:55.953265 systemd[1]: session-11.scope: Deactivated successfully. Feb 14 00:52:55.956803 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Feb 14 00:52:55.959568 systemd-logind[1486]: Removed session 11. Feb 14 00:53:01.100926 systemd[1]: Started sshd@12-10.230.17.110:22-147.75.109.163:34022.service - OpenSSH per-connection server daemon (147.75.109.163:34022). Feb 14 00:53:02.012627 sshd[5330]: Accepted publickey for core from 147.75.109.163 port 34022 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:02.015238 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:02.023227 systemd-logind[1486]: New session 12 of user core. Feb 14 00:53:02.030185 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 00:53:02.756833 sshd[5330]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:02.761825 systemd[1]: sshd@12-10.230.17.110:22-147.75.109.163:34022.service: Deactivated successfully. Feb 14 00:53:02.762600 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Feb 14 00:53:02.765075 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 00:53:02.767077 systemd-logind[1486]: Removed session 12. Feb 14 00:53:02.915706 systemd[1]: Started sshd@13-10.230.17.110:22-147.75.109.163:34036.service - OpenSSH per-connection server daemon (147.75.109.163:34036). Feb 14 00:53:03.619562 systemd[1]: run-containerd-runc-k8s.io-4b015b1b2753139f8d3e0ad84069f7c74dede7db769de25311cd4bf04be5f9b9-runc.HT5DOh.mount: Deactivated successfully. Feb 14 00:53:03.802683 sshd[5344]: Accepted publickey for core from 147.75.109.163 port 34036 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:03.804961 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:03.813564 systemd-logind[1486]: New session 13 of user core. Feb 14 00:53:03.821725 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 00:53:04.616087 sshd[5344]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:04.621487 systemd[1]: sshd@13-10.230.17.110:22-147.75.109.163:34036.service: Deactivated successfully. Feb 14 00:53:04.624142 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 00:53:04.626495 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Feb 14 00:53:04.628321 systemd-logind[1486]: Removed session 13. 
Feb 14 00:53:04.775946 systemd[1]: Started sshd@14-10.230.17.110:22-147.75.109.163:34040.service - OpenSSH per-connection server daemon (147.75.109.163:34040). Feb 14 00:53:05.684532 sshd[5373]: Accepted publickey for core from 147.75.109.163 port 34040 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:05.686791 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:05.695706 systemd-logind[1486]: New session 14 of user core. Feb 14 00:53:05.702801 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 14 00:53:06.401562 sshd[5373]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:06.407594 systemd[1]: sshd@14-10.230.17.110:22-147.75.109.163:34040.service: Deactivated successfully. Feb 14 00:53:06.410797 systemd[1]: session-14.scope: Deactivated successfully. Feb 14 00:53:06.413118 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Feb 14 00:53:06.414630 systemd-logind[1486]: Removed session 14. Feb 14 00:53:11.556442 systemd[1]: Started sshd@15-10.230.17.110:22-147.75.109.163:48118.service - OpenSSH per-connection server daemon (147.75.109.163:48118). Feb 14 00:53:12.482884 sshd[5394]: Accepted publickey for core from 147.75.109.163 port 48118 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:12.485852 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:12.492968 systemd-logind[1486]: New session 15 of user core. Feb 14 00:53:12.497600 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 14 00:53:13.247241 sshd[5394]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:13.254920 systemd[1]: sshd@15-10.230.17.110:22-147.75.109.163:48118.service: Deactivated successfully. Feb 14 00:53:13.257423 systemd[1]: session-15.scope: Deactivated successfully. Feb 14 00:53:13.258513 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Feb 14 00:53:13.261111 systemd-logind[1486]: Removed session 15. Feb 14 00:53:18.408778 systemd[1]: Started sshd@16-10.230.17.110:22-147.75.109.163:48130.service - OpenSSH per-connection server daemon (147.75.109.163:48130). Feb 14 00:53:19.320038 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 48130 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:19.323888 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:19.332718 systemd-logind[1486]: New session 16 of user core. Feb 14 00:53:19.339781 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 14 00:53:20.088519 sshd[5415]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:20.092684 systemd[1]: sshd@16-10.230.17.110:22-147.75.109.163:48130.service: Deactivated successfully. Feb 14 00:53:20.096181 systemd[1]: session-16.scope: Deactivated successfully. Feb 14 00:53:20.098668 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Feb 14 00:53:20.100236 systemd-logind[1486]: Removed session 16. Feb 14 00:53:20.779707 systemd[1]: run-containerd-runc-k8s.io-ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca-runc.6RUiIJ.mount: Deactivated successfully. Feb 14 00:53:25.245852 systemd[1]: Started sshd@17-10.230.17.110:22-147.75.109.163:34390.service - OpenSSH per-connection server daemon (147.75.109.163:34390). 
Feb 14 00:53:26.197473 sshd[5449]: Accepted publickey for core from 147.75.109.163 port 34390 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:26.200022 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:26.209322 systemd-logind[1486]: New session 17 of user core. Feb 14 00:53:26.215605 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 14 00:53:27.137444 sshd[5449]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:27.148112 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Feb 14 00:53:27.150600 systemd[1]: sshd@17-10.230.17.110:22-147.75.109.163:34390.service: Deactivated successfully. Feb 14 00:53:27.156402 systemd[1]: session-17.scope: Deactivated successfully. Feb 14 00:53:27.159451 systemd-logind[1486]: Removed session 17. Feb 14 00:53:27.298504 systemd[1]: Started sshd@18-10.230.17.110:22-147.75.109.163:34398.service - OpenSSH per-connection server daemon (147.75.109.163:34398). Feb 14 00:53:28.195495 sshd[5463]: Accepted publickey for core from 147.75.109.163 port 34398 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:28.197968 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:28.205783 systemd-logind[1486]: New session 18 of user core. Feb 14 00:53:28.211621 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 14 00:53:29.290404 sshd[5463]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:29.299105 systemd[1]: sshd@18-10.230.17.110:22-147.75.109.163:34398.service: Deactivated successfully. Feb 14 00:53:29.303414 systemd[1]: session-18.scope: Deactivated successfully. Feb 14 00:53:29.306414 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Feb 14 00:53:29.308214 systemd-logind[1486]: Removed session 18. Feb 14 00:53:29.453893 systemd[1]: Started sshd@19-10.230.17.110:22-147.75.109.163:34404.service - OpenSSH per-connection server daemon (147.75.109.163:34404). Feb 14 00:53:30.387686 sshd[5475]: Accepted publickey for core from 147.75.109.163 port 34404 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:30.390113 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:30.401030 systemd-logind[1486]: New session 19 of user core. Feb 14 00:53:30.406709 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 14 00:53:34.090327 sshd[5475]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:34.099070 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Feb 14 00:53:34.100020 systemd[1]: sshd@19-10.230.17.110:22-147.75.109.163:34404.service: Deactivated successfully. Feb 14 00:53:34.103377 systemd[1]: session-19.scope: Deactivated successfully. Feb 14 00:53:34.106275 systemd-logind[1486]: Removed session 19. Feb 14 00:53:34.244497 systemd[1]: Started sshd@20-10.230.17.110:22-147.75.109.163:53750.service - OpenSSH per-connection server daemon (147.75.109.163:53750). Feb 14 00:53:35.182085 sshd[5515]: Accepted publickey for core from 147.75.109.163 port 53750 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:35.185632 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:35.193772 systemd-logind[1486]: New session 20 of user core. Feb 14 00:53:35.199758 systemd[1]: Started session-20.scope - Session 20 of User core. 
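From session 10 onward the log settles into a regular cycle: systemd starts a per-connection sshd@N.service, sshd accepts the core user's RSA key, pam_unix and systemd-logind open session-N.scope, and shortly afterwards the session closes and both units deactivate. A small, illustrative Go program that pairs those opened/closed lines by sshd PID to report how long each session lasted; the line format is taken from the entries above, while everything else (reading journal output on stdin, the pairing logic) is an assumption for the sketch.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the pam_unix session lines seen above, e.g.
// "Feb 14 00:52:47.781033 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)"
var sessionRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed) for user core`)

func main() {
	opened := map[string]time.Time{} // sshd PID -> open timestamp
	sc := bufio.NewScanner(os.Stdin)
	// Allow very long lines, such as the WorkloadEndpoint dumps earlier in the log.
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := sessionRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// The journal timestamp carries no year; durations within a day are unaffected.
		t, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		pid, event := m[2], m[3]
		if event == "opened" {
			opened[pid] = t
		} else if start, ok := opened[pid]; ok {
			fmt.Printf("sshd[%s]: session lasted %s\n", pid, t.Sub(start).Round(time.Millisecond))
			delete(opened, pid)
		}
	}
}

Run against the entries above, this would pair, for example, sshd[5274]'s open at 00:52:47 with its close at 00:52:48, while leaving the preauth failures from 218.92.0.226 untouched, since those never produce a pam_unix session line.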
Feb 14 00:53:36.640063 sshd[5515]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:36.653314 systemd[1]: sshd@20-10.230.17.110:22-147.75.109.163:53750.service: Deactivated successfully. Feb 14 00:53:36.656208 systemd[1]: session-20.scope: Deactivated successfully. Feb 14 00:53:36.657369 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Feb 14 00:53:36.659472 systemd-logind[1486]: Removed session 20. Feb 14 00:53:36.807591 systemd[1]: Started sshd@21-10.230.17.110:22-147.75.109.163:53764.service - OpenSSH per-connection server daemon (147.75.109.163:53764). Feb 14 00:53:37.705659 sshd[5547]: Accepted publickey for core from 147.75.109.163 port 53764 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:37.707993 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:37.714743 systemd-logind[1486]: New session 21 of user core. Feb 14 00:53:37.725606 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 14 00:53:38.552621 sshd[5547]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:38.558584 systemd[1]: sshd@21-10.230.17.110:22-147.75.109.163:53764.service: Deactivated successfully. Feb 14 00:53:38.561762 systemd[1]: session-21.scope: Deactivated successfully. Feb 14 00:53:38.563060 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Feb 14 00:53:38.564661 systemd-logind[1486]: Removed session 21. Feb 14 00:53:43.709708 systemd[1]: Started sshd@22-10.230.17.110:22-147.75.109.163:54644.service - OpenSSH per-connection server daemon (147.75.109.163:54644). Feb 14 00:53:44.611954 sshd[5565]: Accepted publickey for core from 147.75.109.163 port 54644 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:44.614364 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:44.620921 systemd-logind[1486]: New session 22 of user core. Feb 14 00:53:44.627581 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 14 00:53:45.327530 sshd[5565]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:45.332976 systemd[1]: sshd@22-10.230.17.110:22-147.75.109.163:54644.service: Deactivated successfully. Feb 14 00:53:45.335963 systemd[1]: session-22.scope: Deactivated successfully. Feb 14 00:53:45.337198 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Feb 14 00:53:45.339046 systemd-logind[1486]: Removed session 22. Feb 14 00:53:50.486723 systemd[1]: Started sshd@23-10.230.17.110:22-147.75.109.163:44186.service - OpenSSH per-connection server daemon (147.75.109.163:44186). Feb 14 00:53:50.836002 systemd[1]: run-containerd-runc-k8s.io-ebb74de28933bad69675c5be45bd95494255189b6c94cc5587e166192e69f3ca-runc.CFjvAS.mount: Deactivated successfully. Feb 14 00:53:51.419952 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 44186 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:51.423098 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:51.430750 systemd-logind[1486]: New session 23 of user core. Feb 14 00:53:51.434628 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 14 00:53:52.136535 sshd[5588]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:52.143495 systemd[1]: sshd@23-10.230.17.110:22-147.75.109.163:44186.service: Deactivated successfully. Feb 14 00:53:52.144181 systemd-logind[1486]: Session 23 logged out. 
Waiting for processes to exit. Feb 14 00:53:52.146155 systemd[1]: session-23.scope: Deactivated successfully. Feb 14 00:53:52.147952 systemd-logind[1486]: Removed session 23. Feb 14 00:53:57.295802 systemd[1]: Started sshd@24-10.230.17.110:22-147.75.109.163:44192.service - OpenSSH per-connection server daemon (147.75.109.163:44192). Feb 14 00:53:58.216445 sshd[5634]: Accepted publickey for core from 147.75.109.163 port 44192 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:53:58.218976 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:53:58.226135 systemd-logind[1486]: New session 24 of user core. Feb 14 00:53:58.234632 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 14 00:53:58.931626 sshd[5634]: pam_unix(sshd:session): session closed for user core Feb 14 00:53:58.937159 systemd[1]: sshd@24-10.230.17.110:22-147.75.109.163:44192.service: Deactivated successfully. Feb 14 00:53:58.939573 systemd[1]: session-24.scope: Deactivated successfully. Feb 14 00:53:58.940569 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Feb 14 00:53:58.942266 systemd-logind[1486]: Removed session 24.