Jan 15 13:49:50.054672 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 15 13:49:50.054713 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 15 13:49:50.054727 kernel: BIOS-provided physical RAM map:
Jan 15 13:49:50.054743 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 15 13:49:50.054752 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 15 13:49:50.054762 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 15 13:49:50.054773 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 15 13:49:50.054783 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 15 13:49:50.054793 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 15 13:49:50.054803 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 15 13:49:50.054813 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 15 13:49:50.054835 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 15 13:49:50.054851 kernel: NX (Execute Disable) protection: active
Jan 15 13:49:50.054861 kernel: APIC: Static calls initialized
Jan 15 13:49:50.054874 kernel: SMBIOS 2.8 present.
Jan 15 13:49:50.054885 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 15 13:49:50.054897 kernel: Hypervisor detected: KVM
Jan 15 13:49:50.054924 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 15 13:49:50.054937 kernel: kvm-clock: using sched offset of 4430261702 cycles
Jan 15 13:49:50.054949 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 15 13:49:50.054961 kernel: tsc: Detected 2499.998 MHz processor
Jan 15 13:49:50.054972 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 15 13:49:50.054984 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 15 13:49:50.054995 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 15 13:49:50.055007 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 15 13:49:50.055018 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 15 13:49:50.055035 kernel: Using GB pages for direct mapping
Jan 15 13:49:50.055047 kernel: ACPI: Early table checksum verification disabled
Jan 15 13:49:50.055059 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 15 13:49:50.055070 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055082 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055093 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055104 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 15 13:49:50.055116 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055127 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055143 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055155 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 13:49:50.055166 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 15 13:49:50.055178 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 15 13:49:50.055189 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 15 13:49:50.055207 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 15 13:49:50.055219 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 15 13:49:50.055260 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 15 13:49:50.055272 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 15 13:49:50.055287 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 15 13:49:50.055299 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 15 13:49:50.055311 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 15 13:49:50.055322 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 15 13:49:50.055334 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 15 13:49:50.055352 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 15 13:49:50.055365 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 15 13:49:50.055376 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 15 13:49:50.055388 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 15 13:49:50.055400 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 15 13:49:50.055412 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 15 13:49:50.055424 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 15 13:49:50.055435 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 15 13:49:50.055447 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 15 13:49:50.055459 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 15 13:49:50.055475 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 15 13:49:50.055487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 15 13:49:50.055499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 15 13:49:50.055511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 15 13:49:50.055523 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 15 13:49:50.055539 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 15 13:49:50.055551 kernel: Zone ranges:
Jan 15 13:49:50.055564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 15 13:49:50.055576 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 15 13:49:50.055592 kernel: Normal empty
Jan 15 13:49:50.055604 kernel: Movable zone start for each node
Jan 15 13:49:50.055616 kernel: Early memory node ranges
Jan 15 13:49:50.055628 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 15 13:49:50.055652 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 15 13:49:50.055663 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 15 13:49:50.055675 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 15 13:49:50.055686 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 15 13:49:50.055698 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 15 13:49:50.055710 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 15 13:49:50.055726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 15 13:49:50.055738 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 15 13:49:50.055749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 15 13:49:50.055774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 15 13:49:50.055785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 15 13:49:50.055796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 15 13:49:50.055807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 15 13:49:50.055818 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 15 13:49:50.055829 kernel: TSC deadline timer available
Jan 15 13:49:50.055845 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 15 13:49:50.055857 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 15 13:49:50.055868 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 15 13:49:50.055879 kernel: Booting paravirtualized kernel on KVM
Jan 15 13:49:50.055903 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 15 13:49:50.055926 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 15 13:49:50.055939 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 15 13:49:50.055950 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 15 13:49:50.055968 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 15 13:49:50.055980 kernel: kvm-guest: PV spinlocks enabled
Jan 15 13:49:50.055992 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 15 13:49:50.056006 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 15 13:49:50.056019 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 13:49:50.056030 kernel: random: crng init done
Jan 15 13:49:50.056042 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 13:49:50.056055 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 15 13:49:50.056072 kernel: Fallback order for Node 0: 0
Jan 15 13:49:50.056084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 15 13:49:50.056096 kernel: Policy zone: DMA32
Jan 15 13:49:50.056108 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 13:49:50.056120 kernel: software IO TLB: area num 16.
Jan 15 13:49:50.056133 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 194824K reserved, 0K cma-reserved)
Jan 15 13:49:50.056145 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 15 13:49:50.056157 kernel: Kernel/User page tables isolation: enabled
Jan 15 13:49:50.056169 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 15 13:49:50.056195 kernel: ftrace: allocated 149 pages with 4 groups
Jan 15 13:49:50.056207 kernel: Dynamic Preempt: voluntary
Jan 15 13:49:50.056219 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 13:49:50.059282 kernel: rcu: RCU event tracing is enabled.
Jan 15 13:49:50.059301 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 15 13:49:50.059315 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 13:49:50.059343 kernel: Rude variant of Tasks RCU enabled.
Jan 15 13:49:50.059361 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 13:49:50.059374 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 13:49:50.059387 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 15 13:49:50.059399 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 15 13:49:50.059412 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 13:49:50.059429 kernel: Console: colour VGA+ 80x25
Jan 15 13:49:50.059442 kernel: printk: console [tty0] enabled
Jan 15 13:49:50.059455 kernel: printk: console [ttyS0] enabled
Jan 15 13:49:50.059468 kernel: ACPI: Core revision 20230628
Jan 15 13:49:50.059481 kernel: APIC: Switch to symmetric I/O mode setup
Jan 15 13:49:50.059498 kernel: x2apic enabled
Jan 15 13:49:50.059511 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 15 13:49:50.059524 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 15 13:49:50.059537 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 15 13:49:50.059550 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 15 13:49:50.059562 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 15 13:49:50.059575 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 15 13:49:50.059587 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 15 13:49:50.059603 kernel: Spectre V2 : Mitigation: Retpolines
Jan 15 13:49:50.059616 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 15 13:49:50.059633 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 15 13:49:50.059646 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 15 13:49:50.059664 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 15 13:49:50.059676 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 15 13:49:50.059689 kernel: MDS: Mitigation: Clear CPU buffers
Jan 15 13:49:50.059701 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 15 13:49:50.059714 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 15 13:49:50.059741 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 15 13:49:50.059756 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 15 13:49:50.059768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 15 13:49:50.059781 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 15 13:49:50.059810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 15 13:49:50.059823 kernel: Freeing SMP alternatives memory: 32K
Jan 15 13:49:50.059835 kernel: pid_max: default: 32768 minimum: 301
Jan 15 13:49:50.059848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 13:49:50.059860 kernel: landlock: Up and running.
Jan 15 13:49:50.059873 kernel: SELinux: Initializing.
Jan 15 13:49:50.059885 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 15 13:49:50.059898 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 15 13:49:50.059922 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 15 13:49:50.059936 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 13:49:50.059948 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 13:49:50.059967 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 13:49:50.059980 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 15 13:49:50.059993 kernel: signal: max sigframe size: 1776
Jan 15 13:49:50.060005 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 13:49:50.060018 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 13:49:50.060031 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 15 13:49:50.060043 kernel: smp: Bringing up secondary CPUs ...
Jan 15 13:49:50.060056 kernel: smpboot: x86: Booting SMP configuration:
Jan 15 13:49:50.060069 kernel: .... node #0, CPUs: #1
Jan 15 13:49:50.060086 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 15 13:49:50.060099 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 13:49:50.060112 kernel: smpboot: Max logical packages: 16
Jan 15 13:49:50.060124 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 15 13:49:50.060137 kernel: devtmpfs: initialized
Jan 15 13:49:50.060149 kernel: x86/mm: Memory block size: 128MB
Jan 15 13:49:50.060162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 13:49:50.060175 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 15 13:49:50.060190 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 13:49:50.060208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 13:49:50.060221 kernel: audit: initializing netlink subsys (disabled)
Jan 15 13:49:50.060250 kernel: audit: type=2000 audit(1736948988.356:1): state=initialized audit_enabled=0 res=1
Jan 15 13:49:50.062256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 13:49:50.062279 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 15 13:49:50.062293 kernel: cpuidle: using governor menu
Jan 15 13:49:50.062306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 13:49:50.062324 kernel: dca service started, version 1.12.1
Jan 15 13:49:50.062337 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 15 13:49:50.062358 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 15 13:49:50.062371 kernel: PCI: Using configuration type 1 for base access
Jan 15 13:49:50.062384 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 15 13:49:50.062397 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 13:49:50.062409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 13:49:50.062422 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 13:49:50.062435 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 13:49:50.062448 kernel: ACPI: Added _OSI(Module Device)
Jan 15 13:49:50.062460 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 13:49:50.062478 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 13:49:50.062491 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 13:49:50.062504 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 13:49:50.062516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 15 13:49:50.062529 kernel: ACPI: Interpreter enabled
Jan 15 13:49:50.062541 kernel: ACPI: PM: (supports S0 S5)
Jan 15 13:49:50.062554 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 15 13:49:50.062566 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 15 13:49:50.062579 kernel: PCI: Using E820 reservations for host bridge windows
Jan 15 13:49:50.062597 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 15 13:49:50.062610 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 15 13:49:50.062906 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 15 13:49:50.063101 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 15 13:49:50.064342 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 15 13:49:50.064366 kernel: PCI host bridge to bus 0000:00
Jan 15 13:49:50.064576 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 15 13:49:50.064740 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 15 13:49:50.064888 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 15 13:49:50.065055 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 15 13:49:50.065206 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 15 13:49:50.066391 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 15 13:49:50.066544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 15 13:49:50.066764 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 15 13:49:50.066986 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 15 13:49:50.067158 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 15 13:49:50.068395 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 15 13:49:50.068565 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 15 13:49:50.068731 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 15 13:49:50.068934 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.069111 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 15 13:49:50.069320 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.069501 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 15 13:49:50.069694 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.069872 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 15 13:49:50.070074 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.073518 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 15 13:49:50.073727 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.073930 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 15 13:49:50.074113 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.074318 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 15 13:49:50.074509 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.074680 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 15 13:49:50.074853 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 15 13:49:50.075030 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 15 13:49:50.075259 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 15 13:49:50.075430 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 15 13:49:50.075601 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 15 13:49:50.075762 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 15 13:49:50.075977 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 15 13:49:50.076157 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 15 13:49:50.076349 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 15 13:49:50.076527 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 15 13:49:50.076705 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 15 13:49:50.076957 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 15 13:49:50.077128 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 15 13:49:50.083386 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 15 13:49:50.083567 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 15 13:49:50.083742 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 15 13:49:50.083943 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 15 13:49:50.084109 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 15 13:49:50.084329 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 15 13:49:50.084537 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 15 13:49:50.084727 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 15 13:49:50.084896 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 15 13:49:50.085075 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 13:49:50.089302 kernel: pci_bus 0000:02: extended config space not accessible
Jan 15 13:49:50.089570 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 15 13:49:50.089762 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 15 13:49:50.089949 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 15 13:49:50.090120 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 15 13:49:50.090338 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 15 13:49:50.090516 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 15 13:49:50.090701 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 15 13:49:50.090863 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 15 13:49:50.091048 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 13:49:50.093302 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 15 13:49:50.093501 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 15 13:49:50.093689 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 15 13:49:50.093863 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 15 13:49:50.094050 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 13:49:50.096269 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 15 13:49:50.096455 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 15 13:49:50.096661 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 13:49:50.096847 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 15 13:49:50.097039 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 15 13:49:50.097203 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 13:49:50.100524 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 15 13:49:50.100722 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 15 13:49:50.100882 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 13:49:50.101082 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 15 13:49:50.101282 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 15 13:49:50.101451 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 13:49:50.101623 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 15 13:49:50.101807 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 15 13:49:50.102011 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 13:49:50.102031 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 15 13:49:50.102046 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 15 13:49:50.102082 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 15 13:49:50.102095 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 15 13:49:50.102108 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 15 13:49:50.102122 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 15 13:49:50.102135 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 15 13:49:50.102147 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 15 13:49:50.102161 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 15 13:49:50.102173 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 15 13:49:50.102186 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 15 13:49:50.102205 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 15 13:49:50.102218 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 15 13:49:50.104406 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 15 13:49:50.104429 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 15 13:49:50.104443 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 15 13:49:50.104456 kernel: iommu: Default domain type: Translated
Jan 15 13:49:50.104477 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 15 13:49:50.104490 kernel: PCI: Using ACPI for IRQ routing
Jan 15 13:49:50.104503 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 15 13:49:50.104524 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 15 13:49:50.104704 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 15 13:49:50.104704 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 15 13:49:50.104902 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 15 13:49:50.105089 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 15 13:49:50.105110 kernel: vgaarb: loaded
Jan 15 13:49:50.105123 kernel: clocksource: Switched to clocksource kvm-clock
Jan 15 13:49:50.105136 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 13:49:50.105150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 13:49:50.105171 kernel: pnp: PnP ACPI init
Jan 15 13:49:50.105385 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 15 13:49:50.105408 kernel: pnp: PnP ACPI: found 5 devices
Jan 15 13:49:50.105421 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 15 13:49:50.105435 kernel: NET: Registered PF_INET protocol family
Jan 15 13:49:50.105448 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 13:49:50.105461 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 15 13:49:50.105475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 13:49:50.105496 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 15 13:49:50.105515 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 15 13:49:50.105530 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 15 13:49:50.105543 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 15 13:49:50.105556 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 15 13:49:50.105574 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 13:49:50.105587 kernel: NET: Registered PF_XDP protocol family
Jan 15 13:49:50.105750 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 15 13:49:50.105926 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 15 13:49:50.106101 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 15 13:49:50.109302 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 15 13:49:50.109514 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 15 13:49:50.109686 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 15 13:49:50.109885 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 15 13:49:50.110094 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 15 13:49:50.110278 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 15 13:49:50.110468 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 15 13:49:50.110631 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 15 13:49:50.110805 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 15 13:49:50.111003 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 15 13:49:50.111165 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 15 13:49:50.116404 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 15 13:49:50.116590 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 15 13:49:50.116794 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 15 13:49:50.116997 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 15 13:49:50.117164 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 15 13:49:50.120390 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 15 13:49:50.120602 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 15 13:49:50.120782 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 13:49:50.120961 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 15 13:49:50.121131 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 15 13:49:50.121346 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 15 13:49:50.121512 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 13:49:50.121676 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 15 13:49:50.121839 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 15 13:49:50.122017 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 15 13:49:50.122193 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 13:49:50.122415 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 15 13:49:50.122589 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 15 13:49:50.122767 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 15 13:49:50.122955 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 13:49:50.123121 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 15 13:49:50.126334 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 15 13:49:50.126505 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 15 13:49:50.126681 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 13:49:50.126860 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 15 13:49:50.127064 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 15 13:49:50.127271 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 15 13:49:50.127438 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 13:49:50.127611 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 15 13:49:50.127781 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 15 13:49:50.127957 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 15 13:49:50.128123 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 13:49:50.130347 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 15 13:49:50.130516 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 15 13:49:50.130680 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 15 13:49:50.130860 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 13:49:50.131033 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 15 13:49:50.131182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 15 13:49:50.132378 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 15 13:49:50.132536 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 15 13:49:50.132726 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 15 13:49:50.132868 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 15 13:49:50.133064 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 15 13:49:50.133220 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 15 13:49:50.135426 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 13:49:50.135603 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 15 13:49:50.135785 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 15 13:49:50.135953 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 15 13:49:50.136107 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 13:49:50.136291 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 15 13:49:50.136453 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 15 13:49:50.136613 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 13:49:50.136789 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 15 13:49:50.136961 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 15 13:49:50.137119 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 13:49:50.138364 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 15 13:49:50.138524 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 15 13:49:50.138716 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 13:49:50.138882 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 15 13:49:50.139059 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 15 13:49:50.139213 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 13:49:50.139415 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 15 13:49:50.139584 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 15 13:49:50.139736 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 13:49:50.139949 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 15 13:49:50.140130 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 15 13:49:50.143320 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 13:49:50.143345 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 15 13:49:50.143359 kernel: PCI: CLS 0 bytes, default 64
Jan 15 13:49:50.143373 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 15 13:49:50.143394 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 15 13:49:50.143408 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 15 13:49:50.143422 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 15 13:49:50.143436 kernel: Initialise system trusted keyrings
Jan 15 13:49:50.143465 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 15 13:49:50.143480 kernel: Key type asymmetric registered
Jan 15 13:49:50.143493 kernel: Asymmetric key parser 'x509' registered
Jan 15 13:49:50.143507 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 15 13:49:50.143529 kernel: io scheduler mq-deadline registered
Jan 15 13:49:50.143542 kernel: io scheduler kyber registered
Jan 15 13:49:50.143556 kernel: io scheduler bfq registered
Jan 15 13:49:50.143748 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 15 13:49:50.143944 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 15 13:49:50.144136 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.144336 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 15 13:49:50.144534 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 15 13:49:50.144823 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.145095 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 15 13:49:50.145294 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 15 13:49:50.145603 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.145792 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 15 13:49:50.146001 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 15 13:49:50.146182 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.148448 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 15 13:49:50.148636 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 15 13:49:50.148812 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.149007 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 15 13:49:50.149173 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 15 13:49:50.149354 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.149526 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 15 13:49:50.149690 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 15 13:49:50.149862 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.150055 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 15 13:49:50.150219 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 15 13:49:50.150402 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 15 13:49:50.150424 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 15 13:49:50.150439 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 15 13:49:50.150462 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 15 13:49:50.150476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 13:49:50.150513 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 15 13:49:50.150526 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 15 13:49:50.150540 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 15 13:49:50.150717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 15 13:49:50.150739 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 15 13:49:50.150895 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 15 13:49:50.151071 kernel: rtc_cmos 00:03: registered as rtc0
Jan 15 13:49:50.151250 kernel: rtc_cmos 00:03: setting system clock to 2025-01-15T13:49:49 UTC (1736948989)
Jan 15 13:49:50.151273 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 15 13:49:50.151287 kernel: intel_pstate: CPU model not supported
Jan 15 13:49:50.151301 kernel: NET: Registered PF_INET6 protocol family
Jan 15 13:49:50.151315 kernel: Segment Routing with IPv6
Jan 15 13:49:50.151329 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 13:49:50.151343 kernel: NET: Registered PF_PACKET protocol family
Jan 15 13:49:50.151364 kernel: Key type dns_resolver registered
Jan 15 13:49:50.151378 kernel: IPI shorthand broadcast: enabled
Jan 15 13:49:50.151392 kernel: sched_clock: Marking stable (1304004134, 239835455)->(1797052618, -253213029)
Jan 15 13:49:50.151406 kernel: registered taskstats version 1
Jan 15 13:49:50.151419 kernel: Loading compiled-in X.509 certificates
Jan 15 13:49:50.151433 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 15 13:49:50.151446 kernel: Key type .fscrypt registered
Jan 15 13:49:50.151460 kernel: Key type fscrypt-provisioning registered
Jan 15 13:49:50.151479 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 13:49:50.151493 kernel: ima: Allocated hash algorithm: sha1
Jan 15 13:49:50.151506 kernel: ima: No architecture policies found
Jan 15 13:49:50.151520 kernel: clk: Disabling unused clocks
Jan 15 13:49:50.151533 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 15 13:49:50.151547 kernel: Write protecting the kernel read-only data: 36864k
Jan 15 13:49:50.151561 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 15 13:49:50.151575 kernel: Run /init as init process
Jan 15 13:49:50.151598 kernel: with arguments:
Jan 15 13:49:50.151616 kernel: /init
Jan 15 13:49:50.151629 kernel: with environment:
Jan 15 13:49:50.151643 kernel: HOME=/
Jan 15 13:49:50.151656 kernel: TERM=linux
Jan 15 13:49:50.151673 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 15 13:49:50.151690 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 13:49:50.151705 systemd[1]: Detected virtualization kvm.
Jan 15 13:49:50.151719 systemd[1]: Detected architecture x86-64.
Jan 15 13:49:50.151738 systemd[1]: Running in initrd.
Jan 15 13:49:50.151752 systemd[1]: No hostname configured, using default hostname.
Jan 15 13:49:50.151767 systemd[1]: Hostname set to <localhost>.
Jan 15 13:49:50.151781 systemd[1]: Initializing machine ID from VM UUID.
Jan 15 13:49:50.151795 systemd[1]: Queued start job for default target initrd.target.
Jan 15 13:49:50.151810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 13:49:50.151824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 13:49:50.151840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 13:49:50.151872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 13:49:50.151886 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 13:49:50.151902 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 13:49:50.151942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 13:49:50.151969 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 13:49:50.151969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 13:49:50.151984 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 13:49:50.151998 systemd[1]: Reached target paths.target - Path Units.
Jan 15 13:49:50.152018 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 13:49:50.152033 systemd[1]: Reached target swap.target - Swaps.
Jan 15 13:49:50.152047 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 13:49:50.152061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 13:49:50.152076 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 13:49:50.152091 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 13:49:50.152105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 13:49:50.152120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 13:49:50.152139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 13:49:50.152154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 13:49:50.152168 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 13:49:50.152183 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 13:49:50.152198 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 13:49:50.152212 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 13:49:50.152239 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 13:49:50.152270 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 13:49:50.152293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 13:49:50.152315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 13:49:50.152330 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 13:49:50.152344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 13:49:50.152360 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 13:49:50.152423 systemd-journald[201]: Collecting audit messages is disabled.
Jan 15 13:49:50.152463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 13:49:50.152479 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 13:49:50.152493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 13:49:50.152513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 13:49:50.152527 kernel: Bridge firewalling registered
Jan 15 13:49:50.152551 systemd-journald[201]: Journal started
Jan 15 13:49:50.152578 systemd-journald[201]: Runtime Journal (/run/log/journal/c97927b055ce4227aeb81132b599d380) is 4.7M, max 38.0M, 33.2M free.
Jan 15 13:49:50.094576 systemd-modules-load[202]: Inserted module 'overlay'
Jan 15 13:49:50.197846 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 13:49:50.148060 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jan 15 13:49:50.202100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 13:49:50.209946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 13:49:50.211973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 13:49:50.222632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 13:49:50.226414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 13:49:50.228326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 13:49:50.247549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 13:49:50.259272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 13:49:50.260550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 13:49:50.267455 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 13:49:50.272416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 13:49:50.283717 dracut-cmdline[236]: dracut-dracut-053
Jan 15 13:49:50.288216 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 15 13:49:50.323489 systemd-resolved[237]: Positive Trust Anchors:
Jan 15 13:49:50.323511 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 13:49:50.323554 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 13:49:50.333494 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 15 13:49:50.336490 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 13:49:50.337650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 13:49:50.401283 kernel: SCSI subsystem initialized
Jan 15 13:49:50.413275 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 13:49:50.427294 kernel: iscsi: registered transport (tcp)
Jan 15 13:49:50.454602 kernel: iscsi: registered transport (qla4xxx)
Jan 15 13:49:50.454666 kernel: QLogic iSCSI HBA Driver
Jan 15 13:49:50.512441 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 13:49:50.531656 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 13:49:50.564862 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 13:49:50.564970 kernel: device-mapper: uevent: version 1.0.3
Jan 15 13:49:50.565788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 15 13:49:50.617299 kernel: raid6: sse2x4 gen() 7289 MB/s
Jan 15 13:49:50.635277 kernel: raid6: sse2x2 gen() 5072 MB/s
Jan 15 13:49:50.653976 kernel: raid6: sse2x1 gen() 5165 MB/s
Jan 15 13:49:50.654052 kernel: raid6: using algorithm sse2x4 gen() 7289 MB/s
Jan 15 13:49:50.672989 kernel: raid6: .... xor() 4643 MB/s, rmw enabled
Jan 15 13:49:50.673034 kernel: raid6: using ssse3x2 recovery algorithm
Jan 15 13:49:50.700332 kernel: xor: automatically using best checksumming function avx
Jan 15 13:49:50.902354 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 13:49:50.917808 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 13:49:50.925567 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 13:49:50.958841 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 15 13:49:50.966273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 13:49:50.975434 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 13:49:51.001595 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation
Jan 15 13:49:51.045208 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 13:49:51.054478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 13:49:51.168799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 13:49:51.181056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 13:49:51.212427 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 13:49:51.218781 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 13:49:51.221809 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 13:49:51.224057 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 13:49:51.234657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 13:49:51.255697 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 13:49:51.310259 kernel: cryptd: max_cpu_qlen set to 1000
Jan 15 13:49:51.321449 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 15 13:49:51.366002 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 15 13:49:51.366274 kernel: AVX version of gcm_enc/dec engaged.
Jan 15 13:49:51.366298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 15 13:49:51.366316 kernel: GPT:17805311 != 125829119
Jan 15 13:49:51.366338 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 15 13:49:51.366354 kernel: GPT:17805311 != 125829119
Jan 15 13:49:51.366384 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 15 13:49:51.366408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 15 13:49:51.366447 kernel: AES CTR mode by8 optimization enabled
Jan 15 13:49:51.339055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 13:49:51.339256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 13:49:51.340274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 13:49:51.341016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 13:49:51.341174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 13:49:51.385770 kernel: ACPI: bus type USB registered
Jan 15 13:49:51.341962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 13:49:51.353563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 13:49:51.408289 kernel: usbcore: registered new interface driver usbfs
Jan 15 13:49:51.408368 kernel: libata version 3.00 loaded.
Jan 15 13:49:51.423251 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473)
Jan 15 13:49:51.443259 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Jan 15 13:49:51.442823 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 15 13:49:51.551613 kernel: usbcore: registered new interface driver hub
Jan 15 13:49:51.551651 kernel: usbcore: registered new device driver usb
Jan 15 13:49:51.551678 kernel: ahci 0000:00:1f.2: version 3.0
Jan 15 13:49:51.551963 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 15 13:49:51.551998 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 15 13:49:51.552200 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 15 13:49:51.553078 kernel: scsi host0: ahci
Jan 15 13:49:51.553342 kernel: scsi host1: ahci
Jan 15 13:49:51.553566 kernel: scsi host2: ahci
Jan 15 13:49:51.553768 kernel: scsi host3: ahci
Jan 15 13:49:51.553992 kernel: scsi host4: ahci
Jan 15 13:49:51.554186 kernel: scsi host5: ahci
Jan 15 13:49:51.554408 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 15 13:49:51.554430 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 15 13:49:51.554448 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 15 13:49:51.554476 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 15 13:49:51.554493 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 15 13:49:51.554511 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 15 13:49:51.557176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 15 13:49:51.558532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 13:49:51.576429 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 15 13:49:51.577382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 15 13:49:51.585762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 15 13:49:51.592476 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 13:49:51.602434 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 13:49:51.608146 disk-uuid[564]: Primary Header is updated.
Jan 15 13:49:51.608146 disk-uuid[564]: Secondary Entries is updated.
Jan 15 13:49:51.608146 disk-uuid[564]: Secondary Header is updated.
Jan 15 13:49:51.613378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:49:51.621259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:49:51.652134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:49:51.797321 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.804263 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.807792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.807831 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.808249 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.811349 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 15 13:49:51.820282 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 13:49:51.839589 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 15 13:49:51.839916 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 15 13:49:51.841217 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 13:49:51.841461 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 15 13:49:51.841694 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 15 13:49:51.841952 kernel: hub 1-0:1.0: USB hub found Jan 15 13:49:51.842192 kernel: hub 1-0:1.0: 4 ports detected Jan 15 13:49:51.842450 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 15 13:49:51.842747 kernel: hub 2-0:1.0: USB hub found Jan 15 13:49:51.843019 kernel: hub 2-0:1.0: 4 ports detected Jan 15 13:49:52.072329 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 15 13:49:52.215281 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 13:49:52.221831 kernel: usbcore: registered new interface driver usbhid Jan 15 13:49:52.221907 kernel: usbhid: USB HID core driver Jan 15 13:49:52.229244 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 15 13:49:52.229281 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 15 13:49:52.625259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:49:52.625342 disk-uuid[565]: The operation has completed successfully. Jan 15 13:49:52.675399 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 13:49:52.675598 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 13:49:52.698411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 13:49:52.704860 sh[584]: Success Jan 15 13:49:52.722276 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 15 13:49:52.783067 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 13:49:52.792364 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 13:49:52.795468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 15 13:49:52.829463 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 15 13:49:52.829517 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:49:52.831798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 13:49:52.835288 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 13:49:52.835328 kernel: BTRFS info (device dm-0): using free space tree Jan 15 13:49:52.846405 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 13:49:52.847900 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 13:49:52.857465 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 13:49:52.861025 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 13:49:52.877645 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:49:52.877696 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:49:52.879735 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:49:52.885259 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:49:52.899406 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:49:52.899030 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 13:49:52.907820 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 13:49:52.914481 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 13:49:53.025794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 13:49:53.037955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 13:49:53.060366 ignition[687]: Ignition 2.19.0 Jan 15 13:49:53.060388 ignition[687]: Stage: fetch-offline Jan 15 13:49:53.060493 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:49:53.060518 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:49:53.062496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 13:49:53.060683 ignition[687]: parsed url from cmdline: "" Jan 15 13:49:53.060690 ignition[687]: no config URL provided Jan 15 13:49:53.060700 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 13:49:53.060717 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 15 13:49:53.060725 ignition[687]: failed to fetch config: resource requires networking Jan 15 13:49:53.060994 ignition[687]: Ignition finished successfully Jan 15 13:49:53.079282 systemd-networkd[767]: lo: Link UP Jan 15 13:49:53.079299 systemd-networkd[767]: lo: Gained carrier Jan 15 13:49:53.081729 systemd-networkd[767]: Enumeration completed Jan 15 13:49:53.082175 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 13:49:53.082298 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:49:53.082304 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 13:49:53.083552 systemd[1]: Reached target network.target - Network. 
Jan 15 13:49:53.083579 systemd-networkd[767]: eth0: Link UP Jan 15 13:49:53.083585 systemd-networkd[767]: eth0: Gained carrier Jan 15 13:49:53.083596 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:49:53.092413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 15 13:49:53.110571 ignition[775]: Ignition 2.19.0 Jan 15 13:49:53.110591 ignition[775]: Stage: fetch Jan 15 13:49:53.110843 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:49:53.110865 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:49:53.110997 ignition[775]: parsed url from cmdline: "" Jan 15 13:49:53.113320 systemd-networkd[767]: eth0: DHCPv4 address 10.230.9.202/30, gateway 10.230.9.201 acquired from 10.230.9.201 Jan 15 13:49:53.111004 ignition[775]: no config URL provided Jan 15 13:49:53.111014 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 13:49:53.111029 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 15 13:49:53.111260 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 15 13:49:53.111285 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 15 13:49:53.111303 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 15 13:49:53.111623 ignition[775]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 15 13:49:53.311859 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Jan 15 13:49:53.326107 ignition[775]: GET result: OK Jan 15 13:49:53.326394 ignition[775]: parsing config with SHA512: 8d74b2c14c272cde402f28134dc6712e740c186629541c9c5b144c13fc53d8efca9777a5e4bb1de3021194016686890cdc49dbde98a65d39f1d8390f2eacb805 Jan 15 13:49:53.331861 unknown[775]: fetched base config from "system" Jan 15 13:49:53.331879 unknown[775]: fetched base config from "system" Jan 15 13:49:53.332652 ignition[775]: fetch: fetch complete Jan 15 13:49:53.331888 unknown[775]: fetched user config from "openstack" Jan 15 13:49:53.332672 ignition[775]: fetch: fetch passed Jan 15 13:49:53.334661 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 13:49:53.332734 ignition[775]: Ignition finished successfully Jan 15 13:49:53.355217 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 13:49:53.374599 ignition[782]: Ignition 2.19.0 Jan 15 13:49:53.375329 ignition[782]: Stage: kargs Jan 15 13:49:53.375604 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:49:53.375624 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:49:53.377043 ignition[782]: kargs: kargs passed Jan 15 13:49:53.379817 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 13:49:53.377107 ignition[782]: Ignition finished successfully Jan 15 13:49:53.386451 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 13:49:53.408776 ignition[788]: Ignition 2.19.0 Jan 15 13:49:53.408797 ignition[788]: Stage: disks Jan 15 13:49:53.409084 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:49:53.409105 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:49:53.412178 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 15 13:49:53.410572 ignition[788]: disks: disks passed Jan 15 13:49:53.414500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 13:49:53.410642 ignition[788]: Ignition finished successfully Jan 15 13:49:53.415737 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 13:49:53.417315 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 13:49:53.418927 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 13:49:53.420453 systemd[1]: Reached target basic.target - Basic System. Jan 15 13:49:53.428889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 13:49:53.450246 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 15 13:49:53.453863 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 13:49:53.461362 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 13:49:53.583255 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 15 13:49:53.584161 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 13:49:53.585590 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 13:49:53.594412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 13:49:53.597207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 13:49:53.598396 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 13:49:53.606608 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 15 13:49:53.615959 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Jan 15 13:49:53.615994 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:49:53.616022 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:49:53.616058 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:49:53.616989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 13:49:53.617037 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 13:49:53.621104 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 13:49:53.625258 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:49:53.631459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 13:49:53.635068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 13:49:53.709167 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 13:49:53.720114 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 15 13:49:53.728616 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 13:49:53.733696 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 13:49:53.841111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 13:49:53.845400 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 13:49:53.849431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 13:49:53.862535 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 15 13:49:53.864911 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:49:53.886193 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 13:49:53.901031 ignition[925]: INFO : Ignition 2.19.0 Jan 15 13:49:53.901031 ignition[925]: INFO : Stage: mount Jan 15 13:49:53.903343 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:49:53.903343 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:49:53.903343 ignition[925]: INFO : mount: mount passed Jan 15 13:49:53.903343 ignition[925]: INFO : Ignition finished successfully Jan 15 13:49:53.905892 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 13:49:54.228570 systemd-networkd[767]: eth0: Gained IPv6LL Jan 15 13:49:55.735580 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8272:24:19ff:fee6:9ca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8272:24:19ff:fee6:9ca/64 assigned by NDisc. Jan 15 13:49:55.735598 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 13:50:00.782826 coreos-metadata[806]: Jan 15 13:50:00.782 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:50:00.805984 coreos-metadata[806]: Jan 15 13:50:00.805 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 13:50:00.822022 coreos-metadata[806]: Jan 15 13:50:00.821 INFO Fetch successful Jan 15 13:50:00.823436 coreos-metadata[806]: Jan 15 13:50:00.823 INFO wrote hostname srv-e1jz5.gb1.brightbox.com to /sysroot/etc/hostname Jan 15 13:50:00.830171 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 15 13:50:00.830521 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 15 13:50:00.840454 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 13:50:00.865488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 13:50:00.880261 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 15 13:50:00.880347 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:50:00.883393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:50:00.885268 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:50:00.891691 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:50:00.894826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 13:50:00.931094 ignition[959]: INFO : Ignition 2.19.0 Jan 15 13:50:00.932400 ignition[959]: INFO : Stage: files Jan 15 13:50:00.934277 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:50:00.934277 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:50:00.936204 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 15 13:50:00.937213 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 13:50:00.937213 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 13:50:00.940689 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 13:50:00.941784 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 13:50:00.941784 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 13:50:00.941434 unknown[959]: wrote ssh authorized keys file for user: core Jan 15 13:50:00.944789 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 13:50:00.944789 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 15 13:50:01.136449 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 13:50:02.000050 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 13:50:02.002427 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 13:50:02.018857 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 13:50:02.018857 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:50:02.018857 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:50:02.018857 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:50:02.018857 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 15 13:50:02.566584 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 13:50:04.376035 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:50:04.376035 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 13:50:04.379295 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 13:50:04.380637 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 13:50:04.380637 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 13:50:04.380637 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 13:50:04.380637 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 13:50:04.387952 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 13:50:04.387952 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 13:50:04.387952 ignition[959]: INFO : files: files passed Jan 15 13:50:04.387952 ignition[959]: INFO : Ignition finished successfully Jan 15 13:50:04.383459 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 13:50:04.395604 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 13:50:04.399860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 13:50:04.402298 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 13:50:04.402452 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 13:50:04.424851 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:50:04.424851 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:50:04.428719 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:50:04.430400 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 13:50:04.431972 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 13:50:04.443026 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 13:50:04.474758 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 13:50:04.474933 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 13:50:04.476841 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 15 13:50:04.478123 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 13:50:04.479692 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 13:50:04.485492 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 13:50:04.505710 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 13:50:04.517593 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 13:50:04.531468 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 13:50:04.533425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 13:50:04.534351 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 13:50:04.536083 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 13:50:04.536283 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 13:50:04.538141 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 13:50:04.539052 systemd[1]: Stopped target basic.target - Basic System. Jan 15 13:50:04.540602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 13:50:04.541993 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 13:50:04.543471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 13:50:04.545034 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 13:50:04.546607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 13:50:04.548192 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 13:50:04.549699 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 13:50:04.551222 systemd[1]: Stopped target swap.target - Swaps. Jan 15 13:50:04.552673 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 13:50:04.552883 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 13:50:04.554626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 13:50:04.555551 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 13:50:04.557104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 13:50:04.557326 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 13:50:04.558793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 13:50:04.558976 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 13:50:04.561091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 13:50:04.561287 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 13:50:04.562959 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 13:50:04.563118 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 13:50:04.574558 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 13:50:04.576341 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 13:50:04.576556 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 13:50:04.580689 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 13:50:04.583035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 15 13:50:04.583224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 13:50:04.584154 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 13:50:04.589799 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 13:50:04.602252 ignition[1012]: INFO : Ignition 2.19.0 Jan 15 13:50:04.602252 ignition[1012]: INFO : Stage: umount Jan 15 13:50:04.602252 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:50:04.602252 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:50:04.604639 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 13:50:04.609347 ignition[1012]: INFO : umount: umount passed Jan 15 13:50:04.609347 ignition[1012]: INFO : Ignition finished successfully Jan 15 13:50:04.604786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 13:50:04.608119 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 13:50:04.609534 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 13:50:04.616722 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 13:50:04.616816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 13:50:04.617626 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 13:50:04.617693 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 13:50:04.619153 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 13:50:04.619244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 13:50:04.621411 systemd[1]: Stopped target network.target - Network. Jan 15 13:50:04.622136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 13:50:04.622218 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 13:50:04.624485 systemd[1]: Stopped target paths.target - Path Units. Jan 15 13:50:04.625252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 13:50:04.626614 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 13:50:04.627684 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 13:50:04.628335 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 13:50:04.629015 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 13:50:04.629091 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 13:50:04.629830 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 13:50:04.629891 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 13:50:04.631357 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 13:50:04.631435 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 13:50:04.632889 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 13:50:04.632954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 13:50:04.634533 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 13:50:04.636210 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 13:50:04.639571 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 13:50:04.640408 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 15 13:50:04.642569 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 13:50:04.642721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 15 13:50:04.643788 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 13:50:04.643943 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 13:50:04.646445 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 13:50:04.646608 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 13:50:04.648356 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 13:50:04.648436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 13:50:04.657625 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 13:50:04.661015 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 13:50:04.661097 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 13:50:04.662977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 13:50:04.665027 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 13:50:04.665257 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 13:50:04.675735 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 13:50:04.675990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 13:50:04.682392 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 13:50:04.682592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 13:50:04.684839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 13:50:04.684942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 13:50:04.686718 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 13:50:04.686781 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 13:50:04.688332 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 13:50:04.688417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 13:50:04.690728 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 13:50:04.690798 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 13:50:04.692223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 13:50:04.692320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:50:04.708459 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 13:50:04.711617 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 13:50:04.711702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 13:50:04.713242 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 13:50:04.713318 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 13:50:04.714667 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 13:50:04.714737 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 13:50:04.718179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 13:50:04.718284 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 13:50:04.719778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 13:50:04.719848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 13:50:04.722775 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 13:50:04.722910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 13:50:04.724310 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 13:50:04.731461 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 13:50:04.742526 systemd[1]: Switching root. Jan 15 13:50:04.771524 systemd-journald[201]: Journal stopped Jan 15 13:50:06.220180 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 15 13:50:06.220303 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 13:50:06.220345 kernel: SELinux: policy capability open_perms=1 Jan 15 13:50:06.220366 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 13:50:06.220393 kernel: SELinux: policy capability always_check_network=0 Jan 15 13:50:06.220412 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 13:50:06.220438 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 13:50:06.220457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 13:50:06.220488 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 13:50:06.220517 kernel: audit: type=1403 audit(1736949005.029:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 13:50:06.220557 systemd[1]: Successfully loaded SELinux policy in 47.956ms. Jan 15 13:50:06.220593 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.665ms. Jan 15 13:50:06.220616 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 13:50:06.220637 systemd[1]: Detected virtualization kvm. Jan 15 13:50:06.220664 systemd[1]: Detected architecture x86-64. Jan 15 13:50:06.220685 systemd[1]: Detected first boot. Jan 15 13:50:06.220705 systemd[1]: Hostname set to <srv-e1jz5.gb1.brightbox.com>. Jan 15 13:50:06.220731 systemd[1]: Initializing machine ID from VM UUID. Jan 15 13:50:06.220752 zram_generator::config[1054]: No configuration found. Jan 15 13:50:06.220787 systemd[1]: Populated /etc with preset unit settings. Jan 15 13:50:06.220809 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 13:50:06.220829 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 13:50:06.220849 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 13:50:06.220869 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 13:50:06.220889 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 13:50:06.220910 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 13:50:06.220932 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 13:50:06.220966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 13:50:06.220989 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 13:50:06.221009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 13:50:06.221029 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 15 13:50:06.221048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 13:50:06.221069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 13:50:06.221089 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 13:50:06.221110 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 13:50:06.221143 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 13:50:06.221173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 13:50:06.221196 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 13:50:06.221216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 13:50:06.224644 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 13:50:06.224675 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 13:50:06.224697 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 13:50:06.224738 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 13:50:06.224761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 13:50:06.224782 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 13:50:06.224802 systemd[1]: Reached target slices.target - Slice Units. Jan 15 13:50:06.224822 systemd[1]: Reached target swap.target - Swaps. Jan 15 13:50:06.224851 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 13:50:06.224873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 13:50:06.224893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 13:50:06.224927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 13:50:06.224969 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 13:50:06.224992 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 13:50:06.225013 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 13:50:06.225033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 13:50:06.225053 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 13:50:06.225073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:06.225106 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 13:50:06.225129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 13:50:06.225148 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 13:50:06.225170 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 13:50:06.225190 systemd[1]: Reached target machines.target - Containers. Jan 15 13:50:06.225210 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 13:50:06.227058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 15 13:50:06.227089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 13:50:06.227128 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 13:50:06.227151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 13:50:06.227171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 13:50:06.227192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 13:50:06.227212 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 13:50:06.227415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 13:50:06.227444 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 13:50:06.227465 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 13:50:06.227511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 13:50:06.227534 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 13:50:06.227555 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 13:50:06.227587 kernel: fuse: init (API version 7.39) Jan 15 13:50:06.227610 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 13:50:06.227630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 13:50:06.227650 kernel: loop: module loaded Jan 15 13:50:06.227670 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 13:50:06.227690 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 13:50:06.227723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 13:50:06.227746 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 13:50:06.227766 systemd[1]: Stopped verity-setup.service. Jan 15 13:50:06.227786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:06.227805 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 13:50:06.227825 kernel: ACPI: bus type drm_connector registered Jan 15 13:50:06.227857 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 13:50:06.227880 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 13:50:06.227900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 13:50:06.227919 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 13:50:06.227940 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 13:50:06.227962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 13:50:06.227982 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 13:50:06.228002 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 13:50:06.228036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 13:50:06.228059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 13:50:06.228080 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 13:50:06.228100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 15 13:50:06.228132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 13:50:06.228166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 13:50:06.228189 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 13:50:06.228209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 13:50:06.231143 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 13:50:06.231196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 13:50:06.231220 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 13:50:06.231297 systemd-journald[1143]: Collecting audit messages is disabled. Jan 15 13:50:06.231351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 13:50:06.231376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 13:50:06.231397 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 13:50:06.231417 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 13:50:06.231438 systemd-journald[1143]: Journal started Jan 15 13:50:06.231480 systemd-journald[1143]: Runtime Journal (/run/log/journal/c97927b055ce4227aeb81132b599d380) is 4.7M, max 38.0M, 33.2M free. Jan 15 13:50:05.793758 systemd[1]: Queued start job for default target multi-user.target. Jan 15 13:50:05.816294 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 13:50:05.816975 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 13:50:06.240619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 13:50:06.252255 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 13:50:06.257251 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 13:50:06.259368 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 13:50:06.263371 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 15 13:50:06.271274 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 13:50:06.285197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 13:50:06.285338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:50:06.295265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 13:50:06.300607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 13:50:06.307258 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 13:50:06.312255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 13:50:06.327451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 13:50:06.336321 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 13:50:06.350622 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 13:50:06.350708 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 15 13:50:06.354128 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 13:50:06.355204 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 13:50:06.356705 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 13:50:06.358323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 13:50:06.388775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 13:50:06.400804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 13:50:06.414541 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 13:50:06.425440 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 15 13:50:06.428875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 15 13:50:06.434940 kernel: loop0: detected capacity change from 0 to 211296 Jan 15 13:50:06.474976 systemd-journald[1143]: Time spent on flushing to /var/log/journal/c97927b055ce4227aeb81132b599d380 is 49.936ms for 1146 entries. Jan 15 13:50:06.474976 systemd-journald[1143]: System Journal (/var/log/journal/c97927b055ce4227aeb81132b599d380) is 8.0M, max 584.8M, 576.8M free. Jan 15 13:50:06.539822 systemd-journald[1143]: Received client request to flush runtime journal. Jan 15 13:50:06.539881 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 13:50:06.539910 kernel: loop1: detected capacity change from 0 to 8 Jan 15 13:50:06.486812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 13:50:06.498222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 13:50:06.501353 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 15 13:50:06.508889 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 13:50:06.520540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 13:50:06.527541 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 15 13:50:06.543530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 13:50:06.570325 kernel: loop2: detected capacity change from 0 to 142488 Jan 15 13:50:06.579315 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 15 13:50:06.579341 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 15 13:50:06.598519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 13:50:06.633266 kernel: loop3: detected capacity change from 0 to 140768 Jan 15 13:50:06.696091 kernel: loop4: detected capacity change from 0 to 211296 Jan 15 13:50:06.725604 kernel: loop5: detected capacity change from 0 to 8 Jan 15 13:50:06.734250 kernel: loop6: detected capacity change from 0 to 142488 Jan 15 13:50:06.757268 kernel: loop7: detected capacity change from 0 to 140768 Jan 15 13:50:06.786350 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 15 13:50:06.787280 (sd-merge)[1213]: Merged extensions into '/usr'. Jan 15 13:50:06.796405 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 13:50:06.796443 systemd[1]: Reloading... 
Jan 15 13:50:06.930266 zram_generator::config[1239]: No configuration found. Jan 15 13:50:07.068580 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 13:50:07.175207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:50:07.242020 systemd[1]: Reloading finished in 444 ms. Jan 15 13:50:07.278502 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 13:50:07.282064 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 13:50:07.292495 systemd[1]: Starting ensure-sysext.service... Jan 15 13:50:07.295493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 13:50:07.317388 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)... Jan 15 13:50:07.317422 systemd[1]: Reloading... Jan 15 13:50:07.329676 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 13:50:07.330339 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 13:50:07.331825 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 13:50:07.332257 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 15 13:50:07.332378 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 15 13:50:07.337108 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 13:50:07.337126 systemd-tmpfiles[1296]: Skipping /boot Jan 15 13:50:07.353205 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 13:50:07.353239 systemd-tmpfiles[1296]: Skipping /boot Jan 15 13:50:07.414260 zram_generator::config[1326]: No configuration found. Jan 15 13:50:07.586137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:50:07.652712 systemd[1]: Reloading finished in 334 ms. Jan 15 13:50:07.674116 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 13:50:07.680830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 13:50:07.694477 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 13:50:07.701451 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 13:50:07.711501 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 13:50:07.720359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 13:50:07.731492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 13:50:07.735647 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 13:50:07.749640 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 13:50:07.753944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 15 13:50:07.754776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:50:07.762681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 13:50:07.766559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 13:50:07.774609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 13:50:07.775713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:50:07.775965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:07.779411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:07.780596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:50:07.780823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:50:07.780953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:07.786596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:07.786925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:50:07.789883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 13:50:07.791390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:50:07.791659 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:50:07.793996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 13:50:07.808550 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 13:50:07.811601 systemd[1]: Finished ensure-sysext.service. Jan 15 13:50:07.821865 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 13:50:07.822556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 13:50:07.839191 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 13:50:07.847468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 13:50:07.849718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 13:50:07.851661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 13:50:07.852512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 13:50:07.855597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 13:50:07.855693 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 15 13:50:07.865318 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 13:50:07.866395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 13:50:07.886956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 13:50:07.887278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 13:50:07.889724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 13:50:07.897341 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 13:50:07.900961 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 15 13:50:07.901706 augenrules[1417]: No rules Jan 15 13:50:07.903214 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 13:50:07.912088 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 13:50:07.953434 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 13:50:07.965465 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 13:50:08.046481 systemd-resolved[1389]: Positive Trust Anchors: Jan 15 13:50:08.046511 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 13:50:08.046556 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 13:50:08.065968 systemd-resolved[1389]: Using system hostname 'srv-e1jz5.gb1.brightbox.com'. Jan 15 13:50:08.072635 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 13:50:08.073926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 13:50:08.093135 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 13:50:08.094053 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 13:50:08.126361 systemd-networkd[1432]: lo: Link UP Jan 15 13:50:08.126373 systemd-networkd[1432]: lo: Gained carrier Jan 15 13:50:08.138074 systemd-networkd[1432]: Enumeration completed Jan 15 13:50:08.138340 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 13:50:08.144140 systemd[1]: Reached target network.target - Network. Jan 15 13:50:08.154464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 13:50:08.174723 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 13:50:08.213269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1433) Jan 15 13:50:08.269087 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:50:08.269332 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
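The positive trust anchor systemd-resolved announces is the IANA root-zone KSK DS record (key tag 20326) that it carries as a built-in DNSSEC anchor; the long negative list exempts private and special-use zones such as 10.in-addr.arpa and .local from validation. Once resolution is up, the per-link state is inspectable (sketch):

    resolvectl status              # servers and DNSSEC mode per link
    resolvectl query example.com   # exercises the configured resolver path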
Jan 15 13:50:08.271161 systemd-networkd[1432]: eth0: Link UP Jan 15 13:50:08.272009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 13:50:08.272530 systemd-networkd[1432]: eth0: Gained carrier Jan 15 13:50:08.272633 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:50:08.280716 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 13:50:08.292367 systemd-networkd[1432]: eth0: DHCPv4 address 10.230.9.202/30, gateway 10.230.9.201 acquired from 10.230.9.201 Jan 15 13:50:08.295682 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 15 13:50:08.312599 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 13:50:08.322258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 13:50:08.328270 kernel: ACPI: button: Power Button [PWRF] Jan 15 13:50:08.340256 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 13:50:08.384273 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 15 13:50:08.391252 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 13:50:08.397970 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 15 13:50:08.398252 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 13:50:08.461582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 13:50:08.621768 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 15 13:50:08.639832 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 15 13:50:08.707425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 13:50:08.728092 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 13:50:08.765020 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 15 13:50:08.766910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 13:50:08.767828 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 13:50:08.768750 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 13:50:08.769759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 13:50:08.770980 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 13:50:08.771930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 13:50:08.772746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 13:50:08.773535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 13:50:08.773594 systemd[1]: Reached target paths.target - Path Units. Jan 15 13:50:08.774308 systemd[1]: Reached target timers.target - Timer Units. Jan 15 13:50:08.776189 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 13:50:08.778999 systemd[1]: Starting docker.socket - Docker Socket for the API... 
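networkd matched eth0 with the catch-all zz-default.network and warns, twice, that the match keys on a potentially unpredictable interface name; the lease itself is a point-to-point /30 (10.230.9.202 with gateway 10.230.9.201). A sketch of pinning the match to the adapter's MAC instead, so the configuration follows the NIC across renames (the address below is hypothetical):

    cat >/etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    MACAddress=52:54:00:00:00:00
    [Network]
    DHCP=ipv4
    EOF
    networkctl reload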
Jan 15 13:50:08.785580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 13:50:08.788110 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 15 13:50:08.789612 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 13:50:08.790485 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 13:50:08.791166 systemd[1]: Reached target basic.target - Basic System. Jan 15 13:50:08.791900 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 13:50:08.791952 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 13:50:08.795422 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 13:50:08.799462 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 13:50:08.802856 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 13:50:08.807442 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 13:50:08.816330 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 13:50:08.829322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 13:50:08.831326 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 13:50:08.834196 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 13:50:08.842348 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 13:50:08.849559 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 13:50:08.854459 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 13:50:08.857526 jq[1477]: false Jan 15 13:50:08.866485 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 13:50:08.869101 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 13:50:08.870817 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 13:50:08.878862 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 13:50:08.882536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 13:50:08.887747 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 15 13:50:08.895968 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 13:50:08.897183 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 13:50:08.903098 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 13:50:08.904507 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 13:50:08.920754 jq[1494]: true Jan 15 13:50:08.930897 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 13:50:08.931196 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 13:50:08.952210 dbus-daemon[1476]: [system] SELinux support is enabled Jan 15 13:50:08.953790 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
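Both sshd and docker are socket-activated: the listeners exist before either daemon runs, and the per-connection sshd@... units that appear later in this log are the Accept=yes pattern doing its job. The live listeners are enumerable (sketch):

    systemctl list-sockets   # each listening socket and the unit it activates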
Jan 15 13:50:08.956076 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 15 13:50:08.960852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 13:50:08.961802 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 13:50:08.960905 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 13:50:08.961754 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 13:50:08.961784 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 13:50:08.974257 extend-filesystems[1478]: Found loop4 Jan 15 13:50:08.979594 extend-filesystems[1478]: Found loop5 Jan 15 13:50:08.979594 extend-filesystems[1478]: Found loop6 Jan 15 13:50:08.979594 extend-filesystems[1478]: Found loop7 Jan 15 13:50:08.979594 extend-filesystems[1478]: Found vda Jan 15 13:50:08.979594 extend-filesystems[1478]: Found vda1 Jan 15 13:50:08.979594 extend-filesystems[1478]: Found vda2 Jan 15 13:50:09.005969 jq[1504]: true Jan 15 13:50:08.978810 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 15 13:50:09.006218 tar[1496]: linux-amd64/helm Jan 15 13:50:09.019810 extend-filesystems[1478]: Found vda3 Jan 15 13:50:09.019810 extend-filesystems[1478]: Found usr Jan 15 13:50:09.019810 extend-filesystems[1478]: Found vda4 Jan 15 13:50:09.019810 extend-filesystems[1478]: Found vda6 Jan 15 13:50:09.019810 extend-filesystems[1478]: Found vda7 Jan 15 13:50:09.019810 extend-filesystems[1478]: Found vda9 Jan 15 13:50:09.019810 extend-filesystems[1478]: Checking size of /dev/vda9 Jan 15 13:50:09.043341 update_engine[1489]: I20250115 13:50:08.986577 1489 main.cc:92] Flatcar Update Engine starting Jan 15 13:50:09.043341 update_engine[1489]: I20250115 13:50:08.998435 1489 update_check_scheduler.cc:74] Next update check in 5m3s Jan 15 13:50:08.981864 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 13:50:09.048447 extend-filesystems[1478]: Resized partition /dev/vda9 Jan 15 13:50:08.999333 systemd[1]: Started update-engine.service - Update Engine. Jan 15 13:50:09.009982 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 13:50:09.049560 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Jan 15 13:50:09.019511 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 13:50:09.070262 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 15 13:50:09.197255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1445) Jan 15 13:50:09.232956 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 13:50:09.233012 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 13:50:09.241764 systemd-logind[1487]: New seat seat0. 
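The extend-filesystems walk above is a plain scan of /dev/vda, its partitions and the attached loop devices; it feeds the root-filesystem grow reported a few entries below, where /dev/vda9 goes from 1,617,920 to 15,121,403 4 KiB blocks (roughly 6.2 GiB to 57.7 GiB) while / stays mounted read-write. The manual equivalent, as a sketch (growpart is from cloud-utils and is an assumption here):

    lsblk -o NAME,FSTYPE,LABEL,SIZE /dev/vda   # the same device inventory
    growpart /dev/vda 9                        # extend partition 9 to the disk end
    resize2fs /dev/vda9                        # ext4 grows on-line while mounted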
Jan 15 13:50:09.243033 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 13:50:09.281848 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Jan 15 13:50:09.298321 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 13:50:09.316659 systemd[1]: Starting sshkeys.service... Jan 15 13:50:09.382688 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 15 13:50:09.382981 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 15 13:50:09.387861 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 15 13:50:09.401155 systemd[1]: Starting polkit.service - Authorization Manager... Jan 15 13:50:09.426107 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 15 13:50:09.436923 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 15 13:50:09.449306 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 15 13:50:09.468335 polkitd[1545]: Started polkitd version 121 Jan 15 13:50:09.481758 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 13:50:09.481758 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 15 13:50:09.481758 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 15 13:50:09.481715 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 13:50:09.489317 polkitd[1545]: Loading rules from directory /etc/polkit-1/rules.d Jan 15 13:50:09.490366 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Jan 15 13:50:09.482058 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 13:50:09.489423 polkitd[1545]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 15 13:50:09.484729 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 13:50:09.494018 polkitd[1545]: Finished loading, compiling and executing 2 rules Jan 15 13:50:09.497663 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 15 13:50:09.499260 systemd[1]: Started polkit.service - Authorization Manager. Jan 15 13:50:09.504174 polkitd[1545]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 15 13:50:09.537168 systemd-hostnamed[1510]: Hostname set to (static) Jan 15 13:50:09.568168 containerd[1506]: time="2025-01-15T13:50:09.567634674Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 13:50:09.632602 containerd[1506]: time="2025-01-15T13:50:09.632261608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.636748 containerd[1506]: time="2025-01-15T13:50:09.636584467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:50:09.636748 containerd[1506]: time="2025-01-15T13:50:09.636628163Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 15 13:50:09.636748 containerd[1506]: time="2025-01-15T13:50:09.636653883Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 13:50:09.637255 containerd[1506]: time="2025-01-15T13:50:09.636941289Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 13:50:09.637255 containerd[1506]: time="2025-01-15T13:50:09.636981923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637255 containerd[1506]: time="2025-01-15T13:50:09.637085643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637255 containerd[1506]: time="2025-01-15T13:50:09.637111852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637429 containerd[1506]: time="2025-01-15T13:50:09.637399170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637429 containerd[1506]: time="2025-01-15T13:50:09.637424070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637533 containerd[1506]: time="2025-01-15T13:50:09.637444718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637533 containerd[1506]: time="2025-01-15T13:50:09.637460697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.637606 containerd[1506]: time="2025-01-15T13:50:09.637581146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.638559 containerd[1506]: time="2025-01-15T13:50:09.637967218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:50:09.638559 containerd[1506]: time="2025-01-15T13:50:09.638111252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:50:09.638559 containerd[1506]: time="2025-01-15T13:50:09.638134855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 13:50:09.640363 containerd[1506]: time="2025-01-15T13:50:09.640333002Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 13:50:09.640468 containerd[1506]: time="2025-01-15T13:50:09.640441786Z" level=info msg="metadata content store policy set" policy=shared Jan 15 13:50:09.646253 containerd[1506]: time="2025-01-15T13:50:09.646027755Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 13:50:09.646253 containerd[1506]: time="2025-01-15T13:50:09.646126631Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 15 13:50:09.646253 containerd[1506]: time="2025-01-15T13:50:09.646157475Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 13:50:09.646253 containerd[1506]: time="2025-01-15T13:50:09.646221660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 13:50:09.646455 containerd[1506]: time="2025-01-15T13:50:09.646280675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 13:50:09.646505 containerd[1506]: time="2025-01-15T13:50:09.646488233Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 13:50:09.647032 containerd[1506]: time="2025-01-15T13:50:09.646799581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 13:50:09.647032 containerd[1506]: time="2025-01-15T13:50:09.646991058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 13:50:09.647032 containerd[1506]: time="2025-01-15T13:50:09.647018779Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 13:50:09.647136 containerd[1506]: time="2025-01-15T13:50:09.647039442Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 13:50:09.647136 containerd[1506]: time="2025-01-15T13:50:09.647061241Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647136 containerd[1506]: time="2025-01-15T13:50:09.647093167Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647136 containerd[1506]: time="2025-01-15T13:50:09.647123297Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647146344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647171738Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647192998Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647211920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647251885Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647284350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.647750 containerd[1506]: time="2025-01-15T13:50:09.647313664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654348596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654435992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654496526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654527832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654553088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654574829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654600684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654629259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654654627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654678301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654704347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654787027Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 13:50:09.654831 containerd[1506]: time="2025-01-15T13:50:09.654840749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.654870749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.654890460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.654972309Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655017547Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655044817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655071467Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655099326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655124189Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655153941Z" level=info msg="NRI interface is disabled by configuration." Jan 15 13:50:09.655345 containerd[1506]: time="2025-01-15T13:50:09.655174321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 15 13:50:09.655696 containerd[1506]: time="2025-01-15T13:50:09.655610481Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 13:50:09.655966 containerd[1506]: time="2025-01-15T13:50:09.655702465Z" level=info msg="Connect containerd service" Jan 15 13:50:09.655966 containerd[1506]: time="2025-01-15T13:50:09.655785404Z" level=info msg="using legacy CRI server" Jan 15 13:50:09.655966 containerd[1506]: time="2025-01-15T13:50:09.655802419Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 13:50:09.656088 containerd[1506]: 
time="2025-01-15T13:50:09.655965669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.661833443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662402133Z" level=info msg="Start subscribing containerd event" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662489625Z" level=info msg="Start recovering state" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662602420Z" level=info msg="Start event monitor" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662649200Z" level=info msg="Start snapshots syncer" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662673756Z" level=info msg="Start cni network conf syncer for default" Jan 15 13:50:09.663821 containerd[1506]: time="2025-01-15T13:50:09.662689091Z" level=info msg="Start streaming server" Jan 15 13:50:09.664741 containerd[1506]: time="2025-01-15T13:50:09.663796820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 13:50:09.664741 containerd[1506]: time="2025-01-15T13:50:09.664596065Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 13:50:09.665142 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 13:50:09.667993 containerd[1506]: time="2025-01-15T13:50:09.667437086Z" level=info msg="containerd successfully booted in 0.103500s" Jan 15 13:50:09.708074 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 13:50:09.740020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 13:50:09.749792 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 13:50:09.755697 systemd[1]: Started sshd@0-10.230.9.202:22-147.75.109.163:47440.service - OpenSSH per-connection server daemon (147.75.109.163:47440). Jan 15 13:50:09.771939 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 13:50:09.772403 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 13:50:09.781759 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 13:50:09.809301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 13:50:09.821159 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 13:50:09.833757 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 13:50:09.834888 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 13:50:09.908678 systemd-networkd[1432]: eth0: Gained IPv6LL Jan 15 13:50:09.911554 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 15 13:50:09.912462 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 13:50:09.916099 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 13:50:09.926603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:50:09.930770 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 13:50:09.978567 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 15 13:50:09.979644 tar[1496]: linux-amd64/LICENSE Jan 15 13:50:09.980779 tar[1496]: linux-amd64/README.md Jan 15 13:50:09.995490 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 13:50:10.688178 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 47440 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:10.691953 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:10.710715 systemd-logind[1487]: New session 1 of user core. Jan 15 13:50:10.711981 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 13:50:10.721291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 13:50:10.743931 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 13:50:10.755197 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 13:50:10.762466 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 13:50:10.796443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:50:10.804095 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:50:10.853316 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 15 13:50:10.854863 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8272:24:19ff:fee6:9ca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8272:24:19ff:fee6:9ca/64 assigned by NDisc. Jan 15 13:50:10.854876 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 13:50:10.897433 systemd[1601]: Queued start job for default target default.target. Jan 15 13:50:10.919402 systemd[1601]: Created slice app.slice - User Application Slice. Jan 15 13:50:10.919447 systemd[1601]: Reached target paths.target - Paths. Jan 15 13:50:10.919471 systemd[1601]: Reached target timers.target - Timers. Jan 15 13:50:10.923398 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 13:50:10.951492 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 13:50:10.951708 systemd[1601]: Reached target sockets.target - Sockets. Jan 15 13:50:10.951735 systemd[1601]: Reached target basic.target - Basic System. Jan 15 13:50:10.951809 systemd[1601]: Reached target default.target - Main User Target. Jan 15 13:50:10.951875 systemd[1601]: Startup finished in 180ms. Jan 15 13:50:10.951886 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 13:50:10.962720 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 13:50:11.542663 kubelet[1611]: E0115 13:50:11.542471 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:50:11.544682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:50:11.544951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:50:11.545497 systemd[1]: kubelet.service: Consumed 1.058s CPU time. 
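The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; systemd will keep restarting it, so expect this failure to repeat below. On a kubeadm-managed node that file is written at bootstrap, so the loop clears once the node is initialized or joined. A sketch, with kubeadm and the pod CIDR as assumptions not shown in this log:

    kubeadm init --pod-network-cidr=10.244.0.0/16   # or: kubeadm join <endpoint> ...
    ls /var/lib/kubelet/config.yaml                 # present after bootstrap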
Jan 15 13:50:11.602695 systemd[1]: Started sshd@1-10.230.9.202:22-147.75.109.163:47602.service - OpenSSH per-connection server daemon (147.75.109.163:47602). Jan 15 13:50:12.491272 sshd[1626]: Accepted publickey for core from 147.75.109.163 port 47602 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:12.492705 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:12.505613 systemd-logind[1487]: New session 2 of user core. Jan 15 13:50:12.510652 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 13:50:12.790748 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 15 13:50:13.110344 sshd[1626]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:13.115525 systemd[1]: sshd@1-10.230.9.202:22-147.75.109.163:47602.service: Deactivated successfully. Jan 15 13:50:13.117867 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 13:50:13.119009 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Jan 15 13:50:13.120770 systemd-logind[1487]: Removed session 2. Jan 15 13:50:13.274047 systemd[1]: Started sshd@2-10.230.9.202:22-147.75.109.163:47616.service - OpenSSH per-connection server daemon (147.75.109.163:47616). Jan 15 13:50:14.166076 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 47616 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:14.168705 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:14.176444 systemd-logind[1487]: New session 3 of user core. Jan 15 13:50:14.191627 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 13:50:14.787211 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:14.793148 systemd[1]: sshd@2-10.230.9.202:22-147.75.109.163:47616.service: Deactivated successfully. Jan 15 13:50:14.795953 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 13:50:14.797130 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Jan 15 13:50:14.798703 systemd-logind[1487]: Removed session 3. Jan 15 13:50:14.886381 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 13:50:14.893528 systemd-logind[1487]: New session 4 of user core. Jan 15 13:50:14.893691 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 13:50:14.906538 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 13:50:14.915468 systemd-logind[1487]: New session 5 of user core. Jan 15 13:50:14.926880 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 15 13:50:15.944132 coreos-metadata[1475]: Jan 15 13:50:15.943 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:50:15.978925 coreos-metadata[1475]: Jan 15 13:50:15.978 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 15 13:50:15.985188 coreos-metadata[1475]: Jan 15 13:50:15.985 INFO Fetch failed with 404: resource not found Jan 15 13:50:15.985188 coreos-metadata[1475]: Jan 15 13:50:15.985 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 13:50:15.985650 coreos-metadata[1475]: Jan 15 13:50:15.985 INFO Fetch successful Jan 15 13:50:15.985813 coreos-metadata[1475]: Jan 15 13:50:15.985 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 15 13:50:15.996937 coreos-metadata[1475]: Jan 15 13:50:15.996 INFO Fetch successful Jan 15 13:50:15.996937 coreos-metadata[1475]: Jan 15 13:50:15.996 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 15 13:50:16.011905 coreos-metadata[1475]: Jan 15 13:50:16.011 INFO Fetch successful Jan 15 13:50:16.012023 coreos-metadata[1475]: Jan 15 13:50:16.011 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 15 13:50:16.026532 coreos-metadata[1475]: Jan 15 13:50:16.026 INFO Fetch successful Jan 15 13:50:16.026805 coreos-metadata[1475]: Jan 15 13:50:16.026 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 15 13:50:16.046507 coreos-metadata[1475]: Jan 15 13:50:16.046 INFO Fetch successful Jan 15 13:50:16.088338 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 13:50:16.089440 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 13:50:16.523911 coreos-metadata[1548]: Jan 15 13:50:16.523 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:50:16.546173 coreos-metadata[1548]: Jan 15 13:50:16.546 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 15 13:50:16.570797 coreos-metadata[1548]: Jan 15 13:50:16.570 INFO Fetch successful Jan 15 13:50:16.570955 coreos-metadata[1548]: Jan 15 13:50:16.570 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 15 13:50:16.607020 coreos-metadata[1548]: Jan 15 13:50:16.606 INFO Fetch successful Jan 15 13:50:16.609515 unknown[1548]: wrote ssh authorized keys file for user: core Jan 15 13:50:16.648057 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys" Jan 15 13:50:16.649717 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 13:50:16.653431 systemd[1]: Finished sshkeys.service. Jan 15 13:50:16.655460 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 13:50:16.655691 systemd[1]: Startup finished in 1.491s (kernel) + 15.268s (initrd) + 11.673s (userspace) = 28.433s. Jan 15 13:50:21.597408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 13:50:21.604504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:50:21.754476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
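coreos-metadata finds no config-drive, falls back to the metadata service, gets a 404 for the OpenStack-specific meta_data.json, then succeeds on the EC2-compatible paths. The endpoints are plain HTTP and can be replayed by hand from the instance (sketch):

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4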
Jan 15 13:50:21.764714 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:50:21.851059 kubelet[1688]: E0115 13:50:21.850860 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:50:21.855385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:50:21.855617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:50:24.943725 systemd[1]: Started sshd@3-10.230.9.202:22-147.75.109.163:36870.service - OpenSSH per-connection server daemon (147.75.109.163:36870). Jan 15 13:50:25.841898 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 36870 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:25.844014 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:25.852524 systemd-logind[1487]: New session 6 of user core. Jan 15 13:50:25.859442 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 13:50:26.459790 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:26.464148 systemd[1]: sshd@3-10.230.9.202:22-147.75.109.163:36870.service: Deactivated successfully. Jan 15 13:50:26.466821 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 13:50:26.469096 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Jan 15 13:50:26.470548 systemd-logind[1487]: Removed session 6. Jan 15 13:50:26.621604 systemd[1]: Started sshd@4-10.230.9.202:22-147.75.109.163:36876.service - OpenSSH per-connection server daemon (147.75.109.163:36876). Jan 15 13:50:27.503745 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 36876 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:27.505784 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:27.513511 systemd-logind[1487]: New session 7 of user core. Jan 15 13:50:27.523418 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 13:50:28.116042 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:28.121579 systemd[1]: sshd@4-10.230.9.202:22-147.75.109.163:36876.service: Deactivated successfully. Jan 15 13:50:28.124136 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 13:50:28.125303 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 15 13:50:28.126860 systemd-logind[1487]: Removed session 7. Jan 15 13:50:28.281682 systemd[1]: Started sshd@5-10.230.9.202:22-147.75.109.163:41858.service - OpenSSH per-connection server daemon (147.75.109.163:41858). Jan 15 13:50:29.210075 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 41858 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:29.212358 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:29.220052 systemd-logind[1487]: New session 8 of user core. Jan 15 13:50:29.226472 systemd[1]: Started session-8.scope - Session 8 of User core. 
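This is the same kubelet failure again; the "Scheduled restart job" lines show systemd's restart policy driving the retries and counting the attempts. The loop's parameters live on the unit (sketch):

    systemctl show -p Restart,RestartUSec,NRestarts kubelet.service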
Jan 15 13:50:29.836955 sshd[1712]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:29.841661 systemd[1]: sshd@5-10.230.9.202:22-147.75.109.163:41858.service: Deactivated successfully. Jan 15 13:50:29.844045 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 13:50:29.845516 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 15 13:50:29.847236 systemd-logind[1487]: Removed session 8. Jan 15 13:50:30.002599 systemd[1]: Started sshd@6-10.230.9.202:22-147.75.109.163:41862.service - OpenSSH per-connection server daemon (147.75.109.163:41862). Jan 15 13:50:30.888810 sshd[1719]: Accepted publickey for core from 147.75.109.163 port 41862 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:30.890921 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:30.898278 systemd-logind[1487]: New session 9 of user core. Jan 15 13:50:30.905484 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 13:50:31.379038 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 13:50:31.379557 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:50:31.396583 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 15 13:50:31.540692 sshd[1719]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:31.545194 systemd[1]: sshd@6-10.230.9.202:22-147.75.109.163:41862.service: Deactivated successfully. Jan 15 13:50:31.547729 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 13:50:31.549611 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Jan 15 13:50:31.551356 systemd-logind[1487]: Removed session 9. Jan 15 13:50:31.702584 systemd[1]: Started sshd@7-10.230.9.202:22-147.75.109.163:41864.service - OpenSSH per-connection server daemon (147.75.109.163:41864). Jan 15 13:50:32.097351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 13:50:32.108535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:50:32.259584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:50:32.277724 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:50:32.364397 kubelet[1737]: E0115 13:50:32.363964 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:50:32.366845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:50:32.367081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:50:32.591364 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 41864 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:32.593498 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:32.600847 systemd-logind[1487]: New session 10 of user core. Jan 15 13:50:32.608522 systemd[1]: Started session-10.scope - Session 10 of User core. 
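The first sudo of the session switches SELinux to enforcing at runtime. The change takes effect immediately and lasts only until reboot unless persisted in the SELinux config (sketch):

    getenforce           # reports Enforcing after the setenforce 1 above
    sudo setenforce 0    # runtime-only switch back to permissive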
Jan 15 13:50:33.069922 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 13:50:33.070445 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:50:33.075493 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 15 13:50:33.083158 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 13:50:33.083628 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:50:33.114573 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 13:50:33.116548 auditctl[1751]: No rules Jan 15 13:50:33.117047 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 13:50:33.117342 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 13:50:33.120594 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 13:50:33.171532 augenrules[1769]: No rules Jan 15 13:50:33.173016 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 13:50:33.174560 sudo[1747]: pam_unix(sudo:session): session closed for user root Jan 15 13:50:33.318697 sshd[1727]: pam_unix(sshd:session): session closed for user core Jan 15 13:50:33.323871 systemd[1]: sshd@7-10.230.9.202:22-147.75.109.163:41864.service: Deactivated successfully. Jan 15 13:50:33.326341 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 13:50:33.327588 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 15 13:50:33.328987 systemd-logind[1487]: Removed session 10. Jan 15 13:50:33.486856 systemd[1]: Started sshd@8-10.230.9.202:22-147.75.109.163:41878.service - OpenSSH per-connection server daemon (147.75.109.163:41878). Jan 15 13:50:34.373442 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 41878 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:50:34.375391 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:50:34.382328 systemd-logind[1487]: New session 11 of user core. Jan 15 13:50:34.389462 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 13:50:34.852981 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 13:50:34.853484 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:50:35.338858 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 13:50:35.339001 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 13:50:35.777839 dockerd[1795]: time="2025-01-15T13:50:35.777726119Z" level=info msg="Starting up" Jan 15 13:50:35.893740 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport317685460-merged.mount: Deactivated successfully. Jan 15 13:50:35.943948 dockerd[1795]: time="2025-01-15T13:50:35.943686023Z" level=info msg="Loading containers: start." Jan 15 13:50:36.081344 kernel: Initializing XFRM netlink socket Jan 15 13:50:36.117669 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 15 13:50:36.188459 systemd-networkd[1432]: docker0: Link UP Jan 15 13:50:36.212204 dockerd[1795]: time="2025-01-15T13:50:36.212126992Z" level=info msg="Loading containers: done." 
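The audit exchange above is a clean rule reload: the default rule files are deleted, auditctl confirms an empty kernel rule set, and restarting audit-rules.service makes augenrules recompile whatever remains under /etc/audit/rules.d (here, nothing). The same cycle by hand (sketch):

    auditctl -l          # prints "No rules", matching the log
    augenrules --load    # rebuilds audit.rules from /etc/audit/rules.d/*.rules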
Jan 15 13:50:36.233684 dockerd[1795]: time="2025-01-15T13:50:36.233584508Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 13:50:36.233880 dockerd[1795]: time="2025-01-15T13:50:36.233828767Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 13:50:36.234056 dockerd[1795]: time="2025-01-15T13:50:36.234008039Z" level=info msg="Daemon has completed initialization" Jan 15 13:50:36.274989 dockerd[1795]: time="2025-01-15T13:50:36.274894974Z" level=info msg="API listen on /run/docker.sock" Jan 15 13:50:36.275128 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 13:50:36.853240 systemd-resolved[1389]: Clock change detected. Flushing caches. Jan 15 13:50:36.853966 systemd-timesyncd[1408]: Contacted time server [2a02:390:56d0:900d:e99::]:123 (2.flatcar.pool.ntp.org). Jan 15 13:50:36.854060 systemd-timesyncd[1408]: Initial clock synchronization to Wed 2025-01-15 13:50:36.852992 UTC. Jan 15 13:50:38.094914 containerd[1506]: time="2025-01-15T13:50:38.094804703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 15 13:50:38.863984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444687596.mount: Deactivated successfully. Jan 15 13:50:41.055673 containerd[1506]: time="2025-01-15T13:50:41.055575871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:41.057014 containerd[1506]: time="2025-01-15T13:50:41.056950094Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 15 13:50:41.057913 containerd[1506]: time="2025-01-15T13:50:41.057880386Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:41.062083 containerd[1506]: time="2025-01-15T13:50:41.062004257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:41.063745 containerd[1506]: time="2025-01-15T13:50:41.063702570Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.968788905s" Jan 15 13:50:41.063822 containerd[1506]: time="2025-01-15T13:50:41.063764157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 15 13:50:41.092341 containerd[1506]: time="2025-01-15T13:50:41.092018234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 15 13:50:41.422913 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 15 13:50:43.113596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 13:50:43.126079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
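dockerd settles on the overlay2 storage driver and warns that native diff is off because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; image builds may be somewhat slower, but nothing is broken. Both facts are checkable (sketch; the module parameter path assumes the overlay module exposes it):

    docker info --format '{{.Driver}}'               # -> overlay2
    cat /sys/module/overlay/parameters/redirect_dir  # kernel-side setting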
Jan 15 13:50:43.502607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:50:43.506721 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:50:43.614865 kubelet[2014]: E0115 13:50:43.614681 2014 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:50:43.620487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:50:43.620767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:50:43.819644 containerd[1506]: time="2025-01-15T13:50:43.819452614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:43.821871 containerd[1506]: time="2025-01-15T13:50:43.821773638Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 15 13:50:43.823236 containerd[1506]: time="2025-01-15T13:50:43.823164784Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:43.827237 containerd[1506]: time="2025-01-15T13:50:43.827155242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:43.829831 containerd[1506]: time="2025-01-15T13:50:43.829006831Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.736924951s" Jan 15 13:50:43.829831 containerd[1506]: time="2025-01-15T13:50:43.829070184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 15 13:50:43.859435 containerd[1506]: time="2025-01-15T13:50:43.859360652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 15 13:50:45.475915 containerd[1506]: time="2025-01-15T13:50:45.475584346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:45.477221 containerd[1506]: time="2025-01-15T13:50:45.477155958Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 15 13:50:45.478272 containerd[1506]: time="2025-01-15T13:50:45.478211943Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:45.482245 containerd[1506]: time="2025-01-15T13:50:45.482207046Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:45.484219 containerd[1506]: time="2025-01-15T13:50:45.483995091Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.624570566s" Jan 15 13:50:45.484219 containerd[1506]: time="2025-01-15T13:50:45.484056788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 15 13:50:45.515794 containerd[1506]: time="2025-01-15T13:50:45.515380838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 15 13:50:47.075157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947589355.mount: Deactivated successfully. Jan 15 13:50:47.704541 containerd[1506]: time="2025-01-15T13:50:47.704452265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:47.706278 containerd[1506]: time="2025-01-15T13:50:47.706214892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 15 13:50:47.707353 containerd[1506]: time="2025-01-15T13:50:47.707275318Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:47.710027 containerd[1506]: time="2025-01-15T13:50:47.709950542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:47.711696 containerd[1506]: time="2025-01-15T13:50:47.711260841Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.195807043s" Jan 15 13:50:47.711696 containerd[1506]: time="2025-01-15T13:50:47.711326906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 15 13:50:47.745959 containerd[1506]: time="2025-01-15T13:50:47.745887062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 13:50:48.348996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070027893.mount: Deactivated successfully. 
Jan 15 13:50:49.615877 containerd[1506]: time="2025-01-15T13:50:49.615746198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:49.617443 containerd[1506]: time="2025-01-15T13:50:49.617340875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 15 13:50:49.618600 containerd[1506]: time="2025-01-15T13:50:49.618537152Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:49.622575 containerd[1506]: time="2025-01-15T13:50:49.622535575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:49.625082 containerd[1506]: time="2025-01-15T13:50:49.624444807Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.878489403s" Jan 15 13:50:49.625082 containerd[1506]: time="2025-01-15T13:50:49.624494815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 15 13:50:49.657104 containerd[1506]: time="2025-01-15T13:50:49.656985849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 15 13:50:50.501823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553194336.mount: Deactivated successfully. 
Jan 15 13:50:50.507637 containerd[1506]: time="2025-01-15T13:50:50.507513477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:50.508712 containerd[1506]: time="2025-01-15T13:50:50.508632107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 15 13:50:50.509444 containerd[1506]: time="2025-01-15T13:50:50.509373732Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:50.512517 containerd[1506]: time="2025-01-15T13:50:50.512452740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:50.513967 containerd[1506]: time="2025-01-15T13:50:50.513783973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 856.740201ms" Jan 15 13:50:50.513967 containerd[1506]: time="2025-01-15T13:50:50.513829011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 15 13:50:50.548184 containerd[1506]: time="2025-01-15T13:50:50.548076720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 15 13:50:51.181275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116128564.mount: Deactivated successfully. Jan 15 13:50:53.864097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 13:50:53.870607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:50:54.136468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:50:54.137451 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:50:54.284851 kubelet[2153]: E0115 13:50:54.284685 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:50:54.287223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:50:54.287606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:50:54.748716 update_engine[1489]: I20250115 13:50:54.748543 1489 update_attempter.cc:509] Updating boot flags... Jan 15 13:50:54.834053 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2173) Jan 15 13:50:54.972411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2174) Jan 15 13:50:55.919784 systemd[1]: Started sshd@9-10.230.9.202:22-92.118.39.73:53114.service - OpenSSH per-connection server daemon (92.118.39.73:53114). 
Jan 15 13:50:56.179277 sshd[2181]: Invalid user sol from 92.118.39.73 port 53114 Jan 15 13:50:56.236226 sshd[2181]: Connection closed by invalid user sol 92.118.39.73 port 53114 [preauth] Jan 15 13:50:56.237705 systemd[1]: sshd@9-10.230.9.202:22-92.118.39.73:53114.service: Deactivated successfully. Jan 15 13:50:56.360063 containerd[1506]: time="2025-01-15T13:50:56.359987036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:56.361865 containerd[1506]: time="2025-01-15T13:50:56.361806419Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 15 13:50:56.378698 containerd[1506]: time="2025-01-15T13:50:56.378638655Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:56.383633 containerd[1506]: time="2025-01-15T13:50:56.383542907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:50:56.385748 containerd[1506]: time="2025-01-15T13:50:56.385512886Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.837377162s" Jan 15 13:50:56.385748 containerd[1506]: time="2025-01-15T13:50:56.385568391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 15 13:51:00.947485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:51:00.962706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:51:01.004992 systemd[1]: Reloading requested from client PID 2251 ('systemctl') (unit session-11.scope)... Jan 15 13:51:01.005290 systemd[1]: Reloading... Jan 15 13:51:01.235931 zram_generator::config[2293]: No configuration found. Jan 15 13:51:01.355561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:51:01.462501 systemd[1]: Reloading finished in 456 ms. Jan 15 13:51:01.546019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 13:51:01.546449 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 13:51:01.547030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:51:01.562807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:51:01.704474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:51:01.718806 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 13:51:01.822718 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 13:51:01.823525 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 13:51:01.823525 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:51:01.827002 kubelet[2358]: I0115 13:51:01.825603 2358 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 13:51:02.899328 kubelet[2358]: I0115 13:51:02.897611 2358 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 13:51:02.899328 kubelet[2358]: I0115 13:51:02.897672 2358 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 13:51:02.899328 kubelet[2358]: I0115 13:51:02.898065 2358 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 13:51:02.972392 kubelet[2358]: E0115 13:51:02.972265 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.9.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:02.972965 kubelet[2358]: I0115 13:51:02.972746 2358 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:51:02.990194 kubelet[2358]: I0115 13:51:02.990164 2358 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 13:51:02.992283 kubelet[2358]: I0115 13:51:02.992244 2358 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 13:51:02.993580 kubelet[2358]: I0115 13:51:02.993513 2358 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 13:51:02.994172 kubelet[2358]: I0115 13:51:02.994134 2358 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 13:51:02.994172 kubelet[2358]: I0115 13:51:02.994168 2358 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 13:51:02.994409 kubelet[2358]: I0115 13:51:02.994368 2358 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:51:02.994617 kubelet[2358]: I0115 13:51:02.994594 2358 kubelet.go:396] "Attempting to sync node with API server" Jan 15 13:51:02.994715 kubelet[2358]: I0115 13:51:02.994639 2358 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 13:51:02.995671 kubelet[2358]: I0115 13:51:02.995370 2358 kubelet.go:312] "Adding apiserver pod source" Jan 15 13:51:02.995671 kubelet[2358]: I0115 13:51:02.995416 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 13:51:02.995671 kubelet[2358]: W0115 13:51:02.995429 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.9.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-e1jz5.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:02.995671 kubelet[2358]: E0115 13:51:02.995494 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.9.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-e1jz5.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:02.997900 kubelet[2358]: W0115 13:51:02.997795 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.9.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.230.9.202:6443: connect: connection refused Jan 15 13:51:02.997900 kubelet[2358]: E0115 13:51:02.997857 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.9.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:02.998893 kubelet[2358]: I0115 13:51:02.998500 2358 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 13:51:03.003337 kubelet[2358]: I0115 13:51:03.003295 2358 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 13:51:03.005216 kubelet[2358]: W0115 13:51:03.004764 2358 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 13:51:03.006106 kubelet[2358]: I0115 13:51:03.005786 2358 server.go:1256] "Started kubelet" Jan 15 13:51:03.008975 kubelet[2358]: I0115 13:51:03.008773 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 13:51:03.016761 kubelet[2358]: E0115 13:51:03.016511 2358 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.9.202:6443/api/v1/namespaces/default/events\": dial tcp 10.230.9.202:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-e1jz5.gb1.brightbox.com.181ae1f85567c8df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-e1jz5.gb1.brightbox.com,UID:srv-e1jz5.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-e1jz5.gb1.brightbox.com,},FirstTimestamp:2025-01-15 13:51:03.005751519 +0000 UTC m=+1.282039829,LastTimestamp:2025-01-15 13:51:03.005751519 +0000 UTC m=+1.282039829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-e1jz5.gb1.brightbox.com,}" Jan 15 13:51:03.019325 kubelet[2358]: I0115 13:51:03.017518 2358 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 13:51:03.019325 kubelet[2358]: I0115 13:51:03.018247 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 13:51:03.019325 kubelet[2358]: I0115 13:51:03.018636 2358 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 13:51:03.019325 kubelet[2358]: I0115 13:51:03.018821 2358 server.go:461] "Adding debug handlers to kubelet server" Jan 15 13:51:03.020215 kubelet[2358]: I0115 13:51:03.020188 2358 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 13:51:03.024640 kubelet[2358]: I0115 13:51:03.024607 2358 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 13:51:03.024739 kubelet[2358]: E0115 13:51:03.024616 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-e1jz5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.202:6443: connect: connection refused" interval="200ms" Jan 15 13:51:03.024854 kubelet[2358]: I0115 13:51:03.024715 2358 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 13:51:03.027110 kubelet[2358]: I0115 13:51:03.027087 2358 factory.go:221] Registration of the systemd 
container factory successfully Jan 15 13:51:03.027380 kubelet[2358]: I0115 13:51:03.027353 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 13:51:03.029919 kubelet[2358]: W0115 13:51:03.029876 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.9.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:03.030657 kubelet[2358]: E0115 13:51:03.030633 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.9.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:03.031132 kubelet[2358]: I0115 13:51:03.031106 2358 factory.go:221] Registration of the containerd container factory successfully Jan 15 13:51:03.042104 kubelet[2358]: I0115 13:51:03.042067 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 13:51:03.047013 kubelet[2358]: E0115 13:51:03.046974 2358 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 13:51:03.055875 kubelet[2358]: I0115 13:51:03.049865 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 13:51:03.058143 kubelet[2358]: I0115 13:51:03.058118 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 13:51:03.058335 kubelet[2358]: I0115 13:51:03.058288 2358 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 13:51:03.058586 kubelet[2358]: E0115 13:51:03.058546 2358 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 13:51:03.060808 kubelet[2358]: W0115 13:51:03.060744 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.9.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:03.061153 kubelet[2358]: E0115 13:51:03.060924 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.9.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:03.070764 kubelet[2358]: I0115 13:51:03.070739 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 13:51:03.070764 kubelet[2358]: I0115 13:51:03.070764 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 13:51:03.071077 kubelet[2358]: I0115 13:51:03.070799 2358 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:51:03.072822 kubelet[2358]: I0115 13:51:03.072797 2358 policy_none.go:49] "None policy: Start" Jan 15 13:51:03.073638 kubelet[2358]: I0115 13:51:03.073613 2358 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 13:51:03.074052 kubelet[2358]: I0115 13:51:03.073907 2358 state_mem.go:35] "Initializing new in-memory state store" Jan 15 13:51:03.087130 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jan 15 13:51:03.104830 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 13:51:03.110051 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 13:51:03.119926 kubelet[2358]: I0115 13:51:03.119770 2358 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 13:51:03.120172 kubelet[2358]: I0115 13:51:03.120148 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 13:51:03.123773 kubelet[2358]: I0115 13:51:03.122762 2358 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.123773 kubelet[2358]: E0115 13:51:03.123241 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.9.202:6443/api/v1/nodes\": dial tcp 10.230.9.202:6443: connect: connection refused" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.125370 kubelet[2358]: E0115 13:51:03.125348 2358 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-e1jz5.gb1.brightbox.com\" not found" Jan 15 13:51:03.159557 kubelet[2358]: I0115 13:51:03.159463 2358 topology_manager.go:215] "Topology Admit Handler" podUID="2743ab3be0bf41b95080878d2a2a37d4" podNamespace="kube-system" podName="kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.162750 kubelet[2358]: I0115 13:51:03.162725 2358 topology_manager.go:215] "Topology Admit Handler" podUID="a96151beee966b42b3c56b79280b022d" podNamespace="kube-system" podName="kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.165649 kubelet[2358]: I0115 13:51:03.165610 2358 topology_manager.go:215] "Topology Admit Handler" podUID="88863b2feaf06df6c84bab9496c0a248" podNamespace="kube-system" podName="kube-scheduler-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.176676 systemd[1]: Created slice kubepods-burstable-pod2743ab3be0bf41b95080878d2a2a37d4.slice - libcontainer container kubepods-burstable-pod2743ab3be0bf41b95080878d2a2a37d4.slice. Jan 15 13:51:03.187934 systemd[1]: Created slice kubepods-burstable-poda96151beee966b42b3c56b79280b022d.slice - libcontainer container kubepods-burstable-poda96151beee966b42b3c56b79280b022d.slice. Jan 15 13:51:03.200720 systemd[1]: Created slice kubepods-burstable-pod88863b2feaf06df6c84bab9496c0a248.slice - libcontainer container kubepods-burstable-pod88863b2feaf06df6c84bab9496c0a248.slice. 
Jan 15 13:51:03.225662 kubelet[2358]: E0115 13:51:03.225619 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-e1jz5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.202:6443: connect: connection refused" interval="400ms" Jan 15 13:51:03.325806 kubelet[2358]: I0115 13:51:03.325605 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88863b2feaf06df6c84bab9496c0a248-kubeconfig\") pod \"kube-scheduler-srv-e1jz5.gb1.brightbox.com\" (UID: \"88863b2feaf06df6c84bab9496c0a248\") " pod="kube-system/kube-scheduler-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.325806 kubelet[2358]: I0115 13:51:03.325662 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-ca-certs\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.325806 kubelet[2358]: I0115 13:51:03.325720 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-flexvolume-dir\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.325806 kubelet[2358]: I0115 13:51:03.325765 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.325806 kubelet[2358]: I0115 13:51:03.325813 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-kubeconfig\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.326189 kubelet[2358]: I0115 13:51:03.325856 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-k8s-certs\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.326189 kubelet[2358]: I0115 13:51:03.325886 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-usr-share-ca-certificates\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.326189 kubelet[2358]: I0115 13:51:03.325948 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-ca-certs\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.326189 kubelet[2358]: I0115 13:51:03.325980 2358 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-k8s-certs\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.334890 kubelet[2358]: I0115 13:51:03.334852 2358 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.335369 kubelet[2358]: E0115 13:51:03.335267 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.9.202:6443/api/v1/nodes\": dial tcp 10.230.9.202:6443: connect: connection refused" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.485844 containerd[1506]: time="2025-01-15T13:51:03.485648265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-e1jz5.gb1.brightbox.com,Uid:2743ab3be0bf41b95080878d2a2a37d4,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:03.502227 containerd[1506]: time="2025-01-15T13:51:03.502148633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-e1jz5.gb1.brightbox.com,Uid:a96151beee966b42b3c56b79280b022d,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:03.504488 containerd[1506]: time="2025-01-15T13:51:03.504454815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-e1jz5.gb1.brightbox.com,Uid:88863b2feaf06df6c84bab9496c0a248,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:03.627600 kubelet[2358]: E0115 13:51:03.627421 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-e1jz5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.202:6443: connect: connection refused" interval="800ms" Jan 15 13:51:03.738414 kubelet[2358]: I0115 13:51:03.738247 2358 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.738942 kubelet[2358]: E0115 13:51:03.738909 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.9.202:6443/api/v1/nodes\": dial tcp 10.230.9.202:6443: connect: connection refused" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:03.867191 kubelet[2358]: W0115 13:51:03.867088 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.9.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:03.867191 kubelet[2358]: E0115 13:51:03.867181 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.9.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.047494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251255174.mount: Deactivated successfully. 
Jan 15 13:51:04.051981 containerd[1506]: time="2025-01-15T13:51:04.050847575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:51:04.053213 containerd[1506]: time="2025-01-15T13:51:04.053170978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 15 13:51:04.055865 containerd[1506]: time="2025-01-15T13:51:04.055828497Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:51:04.057417 containerd[1506]: time="2025-01-15T13:51:04.057373022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 13:51:04.059640 containerd[1506]: time="2025-01-15T13:51:04.059591597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 13:51:04.059755 containerd[1506]: time="2025-01-15T13:51:04.059722893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:51:04.068224 containerd[1506]: time="2025-01-15T13:51:04.068187554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:51:04.070239 containerd[1506]: time="2025-01-15T13:51:04.070202222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.674406ms" Jan 15 13:51:04.072249 containerd[1506]: time="2025-01-15T13:51:04.072125391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.903944ms" Jan 15 13:51:04.074783 containerd[1506]: time="2025-01-15T13:51:04.074631055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:51:04.075331 containerd[1506]: time="2025-01-15T13:51:04.075179220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.398349ms" Jan 15 13:51:04.172332 kubelet[2358]: W0115 13:51:04.172170 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.9.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 
13:51:04.172332 kubelet[2358]: E0115 13:51:04.172239 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.9.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.296689 containerd[1506]: time="2025-01-15T13:51:04.296536667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:04.301390 containerd[1506]: time="2025-01-15T13:51:04.297656169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:04.301390 containerd[1506]: time="2025-01-15T13:51:04.297687207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.301390 containerd[1506]: time="2025-01-15T13:51:04.297834187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.307445 containerd[1506]: time="2025-01-15T13:51:04.307337328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:04.307551 containerd[1506]: time="2025-01-15T13:51:04.307493185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:04.308710 containerd[1506]: time="2025-01-15T13:51:04.307594351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.308710 containerd[1506]: time="2025-01-15T13:51:04.307864461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:04.308710 containerd[1506]: time="2025-01-15T13:51:04.307927588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:04.308710 containerd[1506]: time="2025-01-15T13:51:04.307955893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.308710 containerd[1506]: time="2025-01-15T13:51:04.308080413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.316372 containerd[1506]: time="2025-01-15T13:51:04.314371419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:04.347498 systemd[1]: Started cri-containerd-1664126eca4c35fcd79a2b50b3d60eb8f2739731cc7c05d49844c5e6bbfe8736.scope - libcontainer container 1664126eca4c35fcd79a2b50b3d60eb8f2739731cc7c05d49844c5e6bbfe8736. Jan 15 13:51:04.354550 systemd[1]: Started cri-containerd-237d702b843d17f956aa6f342893cb4eee25f0f5af855ef227052148be6782b4.scope - libcontainer container 237d702b843d17f956aa6f342893cb4eee25f0f5af855ef227052148be6782b4. Jan 15 13:51:04.368576 systemd[1]: Started cri-containerd-85a18e4925954a2048cbaf442eae34489011613f6e1a83755adca9bcd3dbb9cf.scope - libcontainer container 85a18e4925954a2048cbaf442eae34489011613f6e1a83755adca9bcd3dbb9cf. 
Jan 15 13:51:04.429827 kubelet[2358]: E0115 13:51:04.428350 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-e1jz5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.202:6443: connect: connection refused" interval="1.6s" Jan 15 13:51:04.430499 kubelet[2358]: W0115 13:51:04.430443 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.9.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-e1jz5.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.430873 kubelet[2358]: E0115 13:51:04.430846 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.9.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-e1jz5.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.461850 kubelet[2358]: W0115 13:51:04.461753 2358 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.9.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.461850 kubelet[2358]: E0115 13:51:04.461825 2358 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.9.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:04.463727 containerd[1506]: time="2025-01-15T13:51:04.463023310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-e1jz5.gb1.brightbox.com,Uid:2743ab3be0bf41b95080878d2a2a37d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"237d702b843d17f956aa6f342893cb4eee25f0f5af855ef227052148be6782b4\"" Jan 15 13:51:04.472412 containerd[1506]: time="2025-01-15T13:51:04.472364503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-e1jz5.gb1.brightbox.com,Uid:a96151beee966b42b3c56b79280b022d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1664126eca4c35fcd79a2b50b3d60eb8f2739731cc7c05d49844c5e6bbfe8736\"" Jan 15 13:51:04.478686 containerd[1506]: time="2025-01-15T13:51:04.478634394Z" level=info msg="CreateContainer within sandbox \"1664126eca4c35fcd79a2b50b3d60eb8f2739731cc7c05d49844c5e6bbfe8736\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 13:51:04.478887 containerd[1506]: time="2025-01-15T13:51:04.478848746Z" level=info msg="CreateContainer within sandbox \"237d702b843d17f956aa6f342893cb4eee25f0f5af855ef227052148be6782b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 13:51:04.488620 containerd[1506]: time="2025-01-15T13:51:04.488588380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-e1jz5.gb1.brightbox.com,Uid:88863b2feaf06df6c84bab9496c0a248,Namespace:kube-system,Attempt:0,} returns sandbox id \"85a18e4925954a2048cbaf442eae34489011613f6e1a83755adca9bcd3dbb9cf\"" Jan 15 13:51:04.492344 containerd[1506]: time="2025-01-15T13:51:04.492260529Z" level=info msg="CreateContainer within sandbox \"85a18e4925954a2048cbaf442eae34489011613f6e1a83755adca9bcd3dbb9cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 
13:51:04.513626 containerd[1506]: time="2025-01-15T13:51:04.513559648Z" level=info msg="CreateContainer within sandbox \"1664126eca4c35fcd79a2b50b3d60eb8f2739731cc7c05d49844c5e6bbfe8736\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1a2ce0651d6b9dfafe78fd3a788762d8eca227eea48c2279db0b87ec3a2c434\"" Jan 15 13:51:04.514879 containerd[1506]: time="2025-01-15T13:51:04.514467454Z" level=info msg="StartContainer for \"d1a2ce0651d6b9dfafe78fd3a788762d8eca227eea48c2279db0b87ec3a2c434\"" Jan 15 13:51:04.516670 containerd[1506]: time="2025-01-15T13:51:04.516436858Z" level=info msg="CreateContainer within sandbox \"237d702b843d17f956aa6f342893cb4eee25f0f5af855ef227052148be6782b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de278007787539e50d8ec33864a83465f601f5c33bb6ef680e5b1b1a9bc439bc\"" Jan 15 13:51:04.517333 containerd[1506]: time="2025-01-15T13:51:04.517079276Z" level=info msg="StartContainer for \"de278007787539e50d8ec33864a83465f601f5c33bb6ef680e5b1b1a9bc439bc\"" Jan 15 13:51:04.524474 containerd[1506]: time="2025-01-15T13:51:04.524433368Z" level=info msg="CreateContainer within sandbox \"85a18e4925954a2048cbaf442eae34489011613f6e1a83755adca9bcd3dbb9cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a6259e60cc365a41170183fceb769916574c899127a4b3feb08ab6cd978d165d\"" Jan 15 13:51:04.525283 containerd[1506]: time="2025-01-15T13:51:04.525239446Z" level=info msg="StartContainer for \"a6259e60cc365a41170183fceb769916574c899127a4b3feb08ab6cd978d165d\"" Jan 15 13:51:04.542878 kubelet[2358]: I0115 13:51:04.542831 2358 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:04.543330 kubelet[2358]: E0115 13:51:04.543284 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.9.202:6443/api/v1/nodes\": dial tcp 10.230.9.202:6443: connect: connection refused" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:04.564483 systemd[1]: Started cri-containerd-de278007787539e50d8ec33864a83465f601f5c33bb6ef680e5b1b1a9bc439bc.scope - libcontainer container de278007787539e50d8ec33864a83465f601f5c33bb6ef680e5b1b1a9bc439bc. Jan 15 13:51:04.573485 systemd[1]: Started cri-containerd-d1a2ce0651d6b9dfafe78fd3a788762d8eca227eea48c2279db0b87ec3a2c434.scope - libcontainer container d1a2ce0651d6b9dfafe78fd3a788762d8eca227eea48c2279db0b87ec3a2c434. Jan 15 13:51:04.597488 systemd[1]: Started cri-containerd-a6259e60cc365a41170183fceb769916574c899127a4b3feb08ab6cd978d165d.scope - libcontainer container a6259e60cc365a41170183fceb769916574c899127a4b3feb08ab6cd978d165d. 
Jan 15 13:51:04.694339 containerd[1506]: time="2025-01-15T13:51:04.693256536Z" level=info msg="StartContainer for \"de278007787539e50d8ec33864a83465f601f5c33bb6ef680e5b1b1a9bc439bc\" returns successfully" Jan 15 13:51:04.695588 containerd[1506]: time="2025-01-15T13:51:04.694922389Z" level=info msg="StartContainer for \"d1a2ce0651d6b9dfafe78fd3a788762d8eca227eea48c2279db0b87ec3a2c434\" returns successfully" Jan 15 13:51:04.697378 containerd[1506]: time="2025-01-15T13:51:04.697345254Z" level=info msg="StartContainer for \"a6259e60cc365a41170183fceb769916574c899127a4b3feb08ab6cd978d165d\" returns successfully" Jan 15 13:51:05.063570 kubelet[2358]: E0115 13:51:05.063519 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.9.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.9.202:6443: connect: connection refused Jan 15 13:51:06.148028 kubelet[2358]: I0115 13:51:06.147990 2358 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:07.353281 kubelet[2358]: E0115 13:51:07.353214 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-e1jz5.gb1.brightbox.com\" not found" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:07.398775 kubelet[2358]: I0115 13:51:07.398685 2358 kubelet_node_status.go:76] "Successfully registered node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:08.000752 kubelet[2358]: I0115 13:51:08.000373 2358 apiserver.go:52] "Watching apiserver" Jan 15 13:51:08.024869 kubelet[2358]: I0115 13:51:08.024802 2358 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 13:51:10.417088 systemd[1]: Reloading requested from client PID 2634 ('systemctl') (unit session-11.scope)... Jan 15 13:51:10.417114 systemd[1]: Reloading... Jan 15 13:51:10.538453 zram_generator::config[2672]: No configuration found. Jan 15 13:51:10.735831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:51:10.861102 systemd[1]: Reloading finished in 443 ms. Jan 15 13:51:10.933051 kubelet[2358]: I0115 13:51:10.933001 2358 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:51:10.935538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:51:10.950177 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 13:51:10.950723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:51:10.950859 systemd[1]: kubelet.service: Consumed 1.734s CPU time, 107.7M memory peak, 0B memory swap peak. Jan 15 13:51:10.957761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:51:11.138183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:51:11.144834 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 13:51:11.315119 kubelet[2737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:51:11.315119 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 13:51:11.315119 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:51:11.315746 kubelet[2737]: I0115 13:51:11.315208 2737 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 13:51:11.324187 kubelet[2737]: I0115 13:51:11.324151 2737 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 13:51:11.324412 kubelet[2737]: I0115 13:51:11.324384 2737 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 13:51:11.324842 kubelet[2737]: I0115 13:51:11.324819 2737 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 13:51:11.327566 kubelet[2737]: I0115 13:51:11.327529 2737 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 13:51:11.331662 kubelet[2737]: I0115 13:51:11.331619 2737 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:51:11.351868 kubelet[2737]: I0115 13:51:11.351596 2737 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 13:51:11.352452 kubelet[2737]: I0115 13:51:11.352412 2737 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 13:51:11.353352 kubelet[2737]: I0115 13:51:11.352706 2737 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 13:51:11.353352 kubelet[2737]: I0115 13:51:11.352770 2737 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 13:51:11.353352 kubelet[2737]: I0115 13:51:11.352789 2737 
container_manager_linux.go:301] "Creating device plugin manager" Jan 15 13:51:11.355968 kubelet[2737]: I0115 13:51:11.355724 2737 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:51:11.356107 kubelet[2737]: I0115 13:51:11.356078 2737 kubelet.go:396] "Attempting to sync node with API server" Jan 15 13:51:11.356863 kubelet[2737]: I0115 13:51:11.356829 2737 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 13:51:11.358043 kubelet[2737]: I0115 13:51:11.356985 2737 kubelet.go:312] "Adding apiserver pod source" Jan 15 13:51:11.358259 kubelet[2737]: I0115 13:51:11.358072 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 13:51:11.364375 kubelet[2737]: I0115 13:51:11.363590 2737 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 13:51:11.364375 kubelet[2737]: I0115 13:51:11.363948 2737 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 13:51:11.364761 kubelet[2737]: I0115 13:51:11.364723 2737 server.go:1256] "Started kubelet" Jan 15 13:51:11.365957 kubelet[2737]: I0115 13:51:11.365931 2737 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 13:51:11.367919 kubelet[2737]: I0115 13:51:11.367884 2737 server.go:461] "Adding debug handlers to kubelet server" Jan 15 13:51:11.369465 kubelet[2737]: I0115 13:51:11.369436 2737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 13:51:11.369721 kubelet[2737]: I0115 13:51:11.369695 2737 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 13:51:11.376492 kubelet[2737]: I0115 13:51:11.372201 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 13:51:11.399071 kubelet[2737]: I0115 13:51:11.398931 2737 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 13:51:11.399969 kubelet[2737]: I0115 13:51:11.399943 2737 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 13:51:11.400327 kubelet[2737]: I0115 13:51:11.400292 2737 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 13:51:11.412463 kubelet[2737]: E0115 13:51:11.412428 2737 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 13:51:11.412818 kubelet[2737]: I0115 13:51:11.412764 2737 factory.go:221] Registration of the containerd container factory successfully Jan 15 13:51:11.412818 kubelet[2737]: I0115 13:51:11.412818 2737 factory.go:221] Registration of the systemd container factory successfully Jan 15 13:51:11.412968 kubelet[2737]: I0115 13:51:11.412934 2737 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 13:51:11.419692 kubelet[2737]: I0115 13:51:11.419644 2737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 13:51:11.421602 kubelet[2737]: I0115 13:51:11.421579 2737 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 13:51:11.421749 kubelet[2737]: I0115 13:51:11.421728 2737 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 13:51:11.421889 kubelet[2737]: I0115 13:51:11.421867 2737 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 13:51:11.422361 kubelet[2737]: E0115 13:51:11.422065 2737 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 13:51:11.513780 kubelet[2737]: I0115 13:51:11.513727 2737 kubelet_node_status.go:73] "Attempting to register node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.522285 kubelet[2737]: E0115 13:51:11.522144 2737 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 13:51:11.528360 kubelet[2737]: I0115 13:51:11.527250 2737 kubelet_node_status.go:112] "Node was previously registered" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.528360 kubelet[2737]: I0115 13:51:11.527392 2737 kubelet_node_status.go:76] "Successfully registered node" node="srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.542131 kubelet[2737]: I0115 13:51:11.542073 2737 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 13:51:11.542131 kubelet[2737]: I0115 13:51:11.542111 2737 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 13:51:11.542569 kubelet[2737]: I0115 13:51:11.542145 2737 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:51:11.542569 kubelet[2737]: I0115 13:51:11.542428 2737 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 13:51:11.542569 kubelet[2737]: I0115 13:51:11.542469 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 13:51:11.542569 kubelet[2737]: I0115 13:51:11.542489 2737 policy_none.go:49] "None policy: Start" Jan 15 13:51:11.545790 kubelet[2737]: I0115 13:51:11.545100 2737 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 13:51:11.545790 kubelet[2737]: I0115 13:51:11.545154 2737 state_mem.go:35] "Initializing new in-memory state store" Jan 15 13:51:11.547993 kubelet[2737]: I0115 13:51:11.547962 2737 state_mem.go:75] "Updated machine memory state" Jan 15 13:51:11.560369 kubelet[2737]: I0115 13:51:11.560239 2737 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 13:51:11.562031 kubelet[2737]: I0115 13:51:11.561810 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 13:51:11.723449 kubelet[2737]: I0115 13:51:11.722565 2737 topology_manager.go:215] "Topology Admit Handler" podUID="2743ab3be0bf41b95080878d2a2a37d4" podNamespace="kube-system" podName="kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.723449 kubelet[2737]: I0115 13:51:11.722756 2737 topology_manager.go:215] "Topology Admit Handler" podUID="a96151beee966b42b3c56b79280b022d" podNamespace="kube-system" podName="kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.723449 kubelet[2737]: I0115 13:51:11.722846 2737 topology_manager.go:215] "Topology Admit Handler" podUID="88863b2feaf06df6c84bab9496c0a248" podNamespace="kube-system" podName="kube-scheduler-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.734950 kubelet[2737]: W0115 13:51:11.734170 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:51:11.735219 kubelet[2737]: W0115 13:51:11.735184 2737 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:51:11.736690 kubelet[2737]: W0115 13:51:11.736515 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:51:11.802595 kubelet[2737]: I0115 13:51:11.802150 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-ca-certs\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.802595 kubelet[2737]: I0115 13:51:11.802211 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-flexvolume-dir\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.802595 kubelet[2737]: I0115 13:51:11.802281 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-k8s-certs\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.802595 kubelet[2737]: I0115 13:51:11.802353 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-kubeconfig\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.802595 kubelet[2737]: I0115 13:51:11.802397 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88863b2feaf06df6c84bab9496c0a248-kubeconfig\") pod \"kube-scheduler-srv-e1jz5.gb1.brightbox.com\" (UID: \"88863b2feaf06df6c84bab9496c0a248\") " pod="kube-system/kube-scheduler-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.803099 kubelet[2737]: I0115 13:51:11.802438 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-ca-certs\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.803099 kubelet[2737]: I0115 13:51:11.802472 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-k8s-certs\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.803957 kubelet[2737]: I0115 13:51:11.803490 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2743ab3be0bf41b95080878d2a2a37d4-usr-share-ca-certificates\") pod \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" (UID: \"2743ab3be0bf41b95080878d2a2a37d4\") " pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:11.804082 kubelet[2737]: I0115 13:51:11.804062 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96151beee966b42b3c56b79280b022d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-e1jz5.gb1.brightbox.com\" (UID: \"a96151beee966b42b3c56b79280b022d\") " pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:12.362505 kubelet[2737]: I0115 13:51:12.362442 2737 apiserver.go:52] "Watching apiserver" Jan 15 13:51:12.400678 kubelet[2737]: I0115 13:51:12.400620 2737 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 13:51:12.497559 kubelet[2737]: W0115 13:51:12.497503 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:51:12.497777 kubelet[2737]: E0115 13:51:12.497623 2737 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-e1jz5.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" Jan 15 13:51:12.575157 kubelet[2737]: I0115 13:51:12.574917 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-e1jz5.gb1.brightbox.com" podStartSLOduration=1.574827363 podStartE2EDuration="1.574827363s" podCreationTimestamp="2025-01-15 13:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:51:12.559462418 +0000 UTC m=+1.366910784" watchObservedRunningTime="2025-01-15 13:51:12.574827363 +0000 UTC m=+1.382275719" Jan 15 13:51:12.593344 kubelet[2737]: I0115 13:51:12.593132 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-e1jz5.gb1.brightbox.com" podStartSLOduration=1.593094604 podStartE2EDuration="1.593094604s" podCreationTimestamp="2025-01-15 13:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:51:12.576220309 +0000 UTC m=+1.383668685" watchObservedRunningTime="2025-01-15 13:51:12.593094604 +0000 UTC m=+1.400542969" Jan 15 13:51:12.633418 kubelet[2737]: I0115 13:51:12.633201 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-e1jz5.gb1.brightbox.com" podStartSLOduration=1.633149913 podStartE2EDuration="1.633149913s" podCreationTimestamp="2025-01-15 13:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:51:12.598394182 +0000 UTC m=+1.405842552" watchObservedRunningTime="2025-01-15 13:51:12.633149913 +0000 UTC m=+1.440598300" Jan 15 13:51:16.997530 sudo[1780]: pam_unix(sudo:session): session closed for user root Jan 15 13:51:17.143446 sshd[1777]: pam_unix(sshd:session): session closed for user core Jan 15 13:51:17.149061 systemd[1]: sshd@8-10.230.9.202:22-147.75.109.163:41878.service: Deactivated successfully. 
Jan 15 13:51:17.153924 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 13:51:17.154629 systemd[1]: session-11.scope: Consumed 6.954s CPU time, 186.3M memory peak, 0B memory swap peak. Jan 15 13:51:17.157668 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 15 13:51:17.159442 systemd-logind[1487]: Removed session 11. Jan 15 13:51:23.041694 kubelet[2737]: I0115 13:51:23.041561 2737 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 13:51:23.042980 containerd[1506]: time="2025-01-15T13:51:23.042868481Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 13:51:23.043486 kubelet[2737]: I0115 13:51:23.043277 2737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 13:51:23.973421 kubelet[2737]: I0115 13:51:23.972913 2737 topology_manager.go:215] "Topology Admit Handler" podUID="1f6af989-68b4-4232-bf47-4faaa6e9ec93" podNamespace="kube-system" podName="kube-proxy-7jshj" Jan 15 13:51:23.988243 kubelet[2737]: I0115 13:51:23.988200 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f6af989-68b4-4232-bf47-4faaa6e9ec93-kube-proxy\") pod \"kube-proxy-7jshj\" (UID: \"1f6af989-68b4-4232-bf47-4faaa6e9ec93\") " pod="kube-system/kube-proxy-7jshj" Jan 15 13:51:23.988503 kubelet[2737]: I0115 13:51:23.988476 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f6af989-68b4-4232-bf47-4faaa6e9ec93-xtables-lock\") pod \"kube-proxy-7jshj\" (UID: \"1f6af989-68b4-4232-bf47-4faaa6e9ec93\") " pod="kube-system/kube-proxy-7jshj" Jan 15 13:51:23.988658 kubelet[2737]: I0115 13:51:23.988634 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f6af989-68b4-4232-bf47-4faaa6e9ec93-lib-modules\") pod \"kube-proxy-7jshj\" (UID: \"1f6af989-68b4-4232-bf47-4faaa6e9ec93\") " pod="kube-system/kube-proxy-7jshj" Jan 15 13:51:23.988833 kubelet[2737]: I0115 13:51:23.988809 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tdt\" (UniqueName: \"kubernetes.io/projected/1f6af989-68b4-4232-bf47-4faaa6e9ec93-kube-api-access-j6tdt\") pod \"kube-proxy-7jshj\" (UID: \"1f6af989-68b4-4232-bf47-4faaa6e9ec93\") " pod="kube-system/kube-proxy-7jshj" Jan 15 13:51:23.990917 systemd[1]: Created slice kubepods-besteffort-pod1f6af989_68b4_4232_bf47_4faaa6e9ec93.slice - libcontainer container kubepods-besteffort-pod1f6af989_68b4_4232_bf47_4faaa6e9ec93.slice. Jan 15 13:51:24.167277 kubelet[2737]: I0115 13:51:24.165851 2737 topology_manager.go:215] "Topology Admit Handler" podUID="dfd80c4c-80a8-420b-bc58-f604be578316" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-544wj" Jan 15 13:51:24.180842 systemd[1]: Created slice kubepods-besteffort-poddfd80c4c_80a8_420b_bc58_f604be578316.slice - libcontainer container kubepods-besteffort-poddfd80c4c_80a8_420b_bc58_f604be578316.slice. 
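The "Updating runtime config through cri with podcidr" record above is kubelet pushing the node's newly assigned pod CIDR (originalPodCIDR="" -> 192.168.0.0/24) down to containerd over the CRI RuntimeService. A rough sketch of that call, assuming the k8s.io/cri-api runtime v1 client and the default containerd socket path; kubelet performs this internally, so this is illustrative only.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; kubelet talks to whatever CRI endpoint it
	// was configured with (containerd on this node).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// The same update the log shows at 13:51:23.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Until this call lands, containerd logs "No cni config template is specified" as seen above, since the CNI config depends on the pod CIDR.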
Jan 15 13:51:24.191353 kubelet[2737]: I0115 13:51:24.191256 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfd80c4c-80a8-420b-bc58-f604be578316-var-lib-calico\") pod \"tigera-operator-c7ccbd65-544wj\" (UID: \"dfd80c4c-80a8-420b-bc58-f604be578316\") " pod="tigera-operator/tigera-operator-c7ccbd65-544wj" Jan 15 13:51:24.191353 kubelet[2737]: I0115 13:51:24.191341 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnml\" (UniqueName: \"kubernetes.io/projected/dfd80c4c-80a8-420b-bc58-f604be578316-kube-api-access-4lnml\") pod \"tigera-operator-c7ccbd65-544wj\" (UID: \"dfd80c4c-80a8-420b-bc58-f604be578316\") " pod="tigera-operator/tigera-operator-c7ccbd65-544wj" Jan 15 13:51:24.310677 containerd[1506]: time="2025-01-15T13:51:24.308859558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jshj,Uid:1f6af989-68b4-4232-bf47-4faaa6e9ec93,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:24.345383 containerd[1506]: time="2025-01-15T13:51:24.345209043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:24.346126 containerd[1506]: time="2025-01-15T13:51:24.345293175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:24.346764 containerd[1506]: time="2025-01-15T13:51:24.346689320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:24.347005 containerd[1506]: time="2025-01-15T13:51:24.346816875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:24.376515 systemd[1]: Started cri-containerd-81c849c04e20e99491663b5b6dc12f4e622fd869d45e8a37e68b9d091f1a0e51.scope - libcontainer container 81c849c04e20e99491663b5b6dc12f4e622fd869d45e8a37e68b9d091f1a0e51. Jan 15 13:51:24.411500 containerd[1506]: time="2025-01-15T13:51:24.411450014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jshj,Uid:1f6af989-68b4-4232-bf47-4faaa6e9ec93,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c849c04e20e99491663b5b6dc12f4e622fd869d45e8a37e68b9d091f1a0e51\"" Jan 15 13:51:24.419189 containerd[1506]: time="2025-01-15T13:51:24.419150424Z" level=info msg="CreateContainer within sandbox \"81c849c04e20e99491663b5b6dc12f4e622fd869d45e8a37e68b9d091f1a0e51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 13:51:24.436518 containerd[1506]: time="2025-01-15T13:51:24.436411251Z" level=info msg="CreateContainer within sandbox \"81c849c04e20e99491663b5b6dc12f4e622fd869d45e8a37e68b9d091f1a0e51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c38ae2a0e0e58e58775b1c5b6ecf633a7ade19be0477d0ea3a851b984e636d8\"" Jan 15 13:51:24.437908 containerd[1506]: time="2025-01-15T13:51:24.437243500Z" level=info msg="StartContainer for \"5c38ae2a0e0e58e58775b1c5b6ecf633a7ade19be0477d0ea3a851b984e636d8\"" Jan 15 13:51:24.477561 systemd[1]: Started cri-containerd-5c38ae2a0e0e58e58775b1c5b6ecf633a7ade19be0477d0ea3a851b984e636d8.scope - libcontainer container 5c38ae2a0e0e58e58775b1c5b6ecf633a7ade19be0477d0ea3a851b984e636d8. 
Jan 15 13:51:24.487124 containerd[1506]: time="2025-01-15T13:51:24.487026476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-544wj,Uid:dfd80c4c-80a8-420b-bc58-f604be578316,Namespace:tigera-operator,Attempt:0,}" Jan 15 13:51:24.536468 containerd[1506]: time="2025-01-15T13:51:24.536187797Z" level=info msg="StartContainer for \"5c38ae2a0e0e58e58775b1c5b6ecf633a7ade19be0477d0ea3a851b984e636d8\" returns successfully" Jan 15 13:51:24.540755 containerd[1506]: time="2025-01-15T13:51:24.540626279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:24.540844 containerd[1506]: time="2025-01-15T13:51:24.540799320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:24.540909 containerd[1506]: time="2025-01-15T13:51:24.540860691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:24.541454 containerd[1506]: time="2025-01-15T13:51:24.541037794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:24.575635 systemd[1]: Started cri-containerd-f9382106b02200aa472defc3410e0b4c0e48a41b01f5793683a0c125aea0c539.scope - libcontainer container f9382106b02200aa472defc3410e0b4c0e48a41b01f5793683a0c125aea0c539. Jan 15 13:51:24.649143 containerd[1506]: time="2025-01-15T13:51:24.649085927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-544wj,Uid:dfd80c4c-80a8-420b-bc58-f604be578316,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9382106b02200aa472defc3410e0b4c0e48a41b01f5793683a0c125aea0c539\"" Jan 15 13:51:24.652322 containerd[1506]: time="2025-01-15T13:51:24.652274238Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 13:51:27.151544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825497962.mount: Deactivated successfully. 
Jan 15 13:51:28.009542 containerd[1506]: time="2025-01-15T13:51:28.009491016Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:28.012337 containerd[1506]: time="2025-01-15T13:51:28.010950192Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764345" Jan 15 13:51:28.012337 containerd[1506]: time="2025-01-15T13:51:28.011199779Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:28.019848 containerd[1506]: time="2025-01-15T13:51:28.019790451Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:28.021111 containerd[1506]: time="2025-01-15T13:51:28.021071827Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.368543987s" Jan 15 13:51:28.021265 containerd[1506]: time="2025-01-15T13:51:28.021231742Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 15 13:51:28.036665 containerd[1506]: time="2025-01-15T13:51:28.036615322Z" level=info msg="CreateContainer within sandbox \"f9382106b02200aa472defc3410e0b4c0e48a41b01f5793683a0c125aea0c539\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 13:51:28.077583 containerd[1506]: time="2025-01-15T13:51:28.077538866Z" level=info msg="CreateContainer within sandbox \"f9382106b02200aa472defc3410e0b4c0e48a41b01f5793683a0c125aea0c539\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d\"" Jan 15 13:51:28.079571 containerd[1506]: time="2025-01-15T13:51:28.079538568Z" level=info msg="StartContainer for \"74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d\"" Jan 15 13:51:28.129504 systemd[1]: run-containerd-runc-k8s.io-74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d-runc.zvm8JC.mount: Deactivated successfully. Jan 15 13:51:28.140714 systemd[1]: Started cri-containerd-74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d.scope - libcontainer container 74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d. 
Jan 15 13:51:28.175631 containerd[1506]: time="2025-01-15T13:51:28.175502192Z" level=info msg="StartContainer for \"74d2b0c538709ca1ecef307e111fbc480471e464d9036059b06ba94b21a4ec4d\" returns successfully" Jan 15 13:51:28.553417 kubelet[2737]: I0115 13:51:28.552538 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7jshj" podStartSLOduration=5.552488022 podStartE2EDuration="5.552488022s" podCreationTimestamp="2025-01-15 13:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:51:25.528795584 +0000 UTC m=+14.336243964" watchObservedRunningTime="2025-01-15 13:51:28.552488022 +0000 UTC m=+17.359936401" Jan 15 13:51:31.451636 kubelet[2737]: I0115 13:51:31.451579 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-544wj" podStartSLOduration=4.077935562 podStartE2EDuration="7.451523839s" podCreationTimestamp="2025-01-15 13:51:24 +0000 UTC" firstStartedPulling="2025-01-15 13:51:24.651113983 +0000 UTC m=+13.458562344" lastFinishedPulling="2025-01-15 13:51:28.024702265 +0000 UTC m=+16.832150621" observedRunningTime="2025-01-15 13:51:28.552873165 +0000 UTC m=+17.360321554" watchObservedRunningTime="2025-01-15 13:51:31.451523839 +0000 UTC m=+20.258972224" Jan 15 13:51:31.791786 kubelet[2737]: I0115 13:51:31.791025 2737 topology_manager.go:215] "Topology Admit Handler" podUID="14e52655-00e9-400f-9080-0c860ee484b7" podNamespace="calico-system" podName="calico-typha-866f5645d9-rddph" Jan 15 13:51:31.807404 systemd[1]: Created slice kubepods-besteffort-pod14e52655_00e9_400f_9080_0c860ee484b7.slice - libcontainer container kubepods-besteffort-pod14e52655_00e9_400f_9080_0c860ee484b7.slice. Jan 15 13:51:31.845378 kubelet[2737]: I0115 13:51:31.845119 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj5xd\" (UniqueName: \"kubernetes.io/projected/14e52655-00e9-400f-9080-0c860ee484b7-kube-api-access-gj5xd\") pod \"calico-typha-866f5645d9-rddph\" (UID: \"14e52655-00e9-400f-9080-0c860ee484b7\") " pod="calico-system/calico-typha-866f5645d9-rddph" Jan 15 13:51:31.845378 kubelet[2737]: I0115 13:51:31.845202 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14e52655-00e9-400f-9080-0c860ee484b7-typha-certs\") pod \"calico-typha-866f5645d9-rddph\" (UID: \"14e52655-00e9-400f-9080-0c860ee484b7\") " pod="calico-system/calico-typha-866f5645d9-rddph" Jan 15 13:51:31.845378 kubelet[2737]: I0115 13:51:31.845239 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e52655-00e9-400f-9080-0c860ee484b7-tigera-ca-bundle\") pod \"calico-typha-866f5645d9-rddph\" (UID: \"14e52655-00e9-400f-9080-0c860ee484b7\") " pod="calico-system/calico-typha-866f5645d9-rddph" Jan 15 13:51:31.877838 kubelet[2737]: I0115 13:51:31.876994 2737 topology_manager.go:215] "Topology Admit Handler" podUID="386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb" podNamespace="calico-system" podName="calico-node-x7kkg" Jan 15 13:51:31.889975 systemd[1]: Created slice kubepods-besteffort-pod386f4eff_f3b1_40d9_91d9_c03f7f2b8ddb.slice - libcontainer container kubepods-besteffort-pod386f4eff_f3b1_40d9_91d9_c03f7f2b8ddb.slice. 
Jan 15 13:51:31.945738 kubelet[2737]: I0115 13:51:31.945687 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-lib-modules\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.945967 kubelet[2737]: I0115 13:51:31.945755 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-cni-net-dir\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.945967 kubelet[2737]: I0115 13:51:31.945795 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-node-certs\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.945967 kubelet[2737]: I0115 13:51:31.945842 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-xtables-lock\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.945967 kubelet[2737]: I0115 13:51:31.945890 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-tigera-ca-bundle\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.945967 kubelet[2737]: I0115 13:51:31.945921 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-cni-bin-dir\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946227 kubelet[2737]: I0115 13:51:31.945956 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-var-lib-calico\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946227 kubelet[2737]: I0115 13:51:31.945992 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-flexvol-driver-host\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946227 kubelet[2737]: I0115 13:51:31.946025 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnp26\" (UniqueName: \"kubernetes.io/projected/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-kube-api-access-qnp26\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946227 kubelet[2737]: I0115 13:51:31.946083 2737 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-policysync\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946227 kubelet[2737]: I0115 13:51:31.946116 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-var-run-calico\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:31.946527 kubelet[2737]: I0115 13:51:31.946147 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb-cni-log-dir\") pod \"calico-node-x7kkg\" (UID: \"386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb\") " pod="calico-system/calico-node-x7kkg" Jan 15 13:51:32.004950 kubelet[2737]: I0115 13:51:32.002929 2737 topology_manager.go:215] "Topology Admit Handler" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" podNamespace="calico-system" podName="csi-node-driver-n4spm" Jan 15 13:51:32.004950 kubelet[2737]: E0115 13:51:32.003290 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:32.048764 kubelet[2737]: I0115 13:51:32.048612 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4ec8923-4d40-4539-b7a9-8d3c151dc6d9-socket-dir\") pod \"csi-node-driver-n4spm\" (UID: \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\") " pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:32.048968 kubelet[2737]: I0115 13:51:32.048777 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4ec8923-4d40-4539-b7a9-8d3c151dc6d9-registration-dir\") pod \"csi-node-driver-n4spm\" (UID: \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\") " pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:32.048968 kubelet[2737]: I0115 13:51:32.048873 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d4ec8923-4d40-4539-b7a9-8d3c151dc6d9-varrun\") pod \"csi-node-driver-n4spm\" (UID: \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\") " pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:32.048968 kubelet[2737]: I0115 13:51:32.048911 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qgn\" (UniqueName: \"kubernetes.io/projected/d4ec8923-4d40-4539-b7a9-8d3c151dc6d9-kube-api-access-r5qgn\") pod \"csi-node-driver-n4spm\" (UID: \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\") " pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:32.048968 kubelet[2737]: I0115 13:51:32.048947 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4ec8923-4d40-4539-b7a9-8d3c151dc6d9-kubelet-dir\") pod 
\"csi-node-driver-n4spm\" (UID: \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\") " pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:32.051557 kubelet[2737]: E0115 13:51:32.051242 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.051557 kubelet[2737]: W0115 13:51:32.051282 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.052871 kubelet[2737]: E0115 13:51:32.052702 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.053738 kubelet[2737]: E0115 13:51:32.053716 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.053857 kubelet[2737]: W0115 13:51:32.053821 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.054001 kubelet[2737]: E0115 13:51:32.053981 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.054726 kubelet[2737]: E0115 13:51:32.054705 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.054955 kubelet[2737]: W0115 13:51:32.054917 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.055337 kubelet[2737]: E0115 13:51:32.055197 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.056503 kubelet[2737]: E0115 13:51:32.056149 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.056503 kubelet[2737]: W0115 13:51:32.056168 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.056503 kubelet[2737]: E0115 13:51:32.056188 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.060113 kubelet[2737]: E0115 13:51:32.060091 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.060225 kubelet[2737]: W0115 13:51:32.060194 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.060902 kubelet[2737]: E0115 13:51:32.060377 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [The three-record FlexVolume probe failure above (driver-call.go:262 "unexpected end of JSON input", driver-call.go:149 "executable file not found in $PATH", plugins.go:730 "Error dynamically probing plugins") repeats essentially verbatim, timestamps aside, from 13:51:32.063 through 13:51:32.190 as kubelet re-probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds; the uds binary is not present yet, since calico-node typically installs the flexvol driver after it starts.] Jan 15 13:51:32.117792 containerd[1506]: time="2025-01-15T13:51:32.117736379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866f5645d9-rddph,Uid:14e52655-00e9-400f-9080-0c860ee484b7,Namespace:calico-system,Attempt:0,}" Jan 15 13:51:32.190252 kubelet[2737]: E0115 13:51:32.190209 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.190252 kubelet[2737]: W0115 13:51:32.190232 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.190813 kubelet[2737]: E0115 13:51:32.190702 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 15 13:51:32.191496 kubelet[2737]: E0115 13:51:32.191468 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.191496 kubelet[2737]: W0115 13:51:32.191490 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.191628 kubelet[2737]: E0115 13:51:32.191510 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.193951 kubelet[2737]: E0115 13:51:32.193923 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.201571 kubelet[2737]: W0115 13:51:32.201518 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.201698 kubelet[2737]: E0115 13:51:32.201577 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.207328 containerd[1506]: time="2025-01-15T13:51:32.203877971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x7kkg,Uid:386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb,Namespace:calico-system,Attempt:0,}" Jan 15 13:51:32.213342 kubelet[2737]: E0115 13:51:32.212493 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:51:32.213342 kubelet[2737]: W0115 13:51:32.212519 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:51:32.213342 kubelet[2737]: E0115 13:51:32.212545 2737 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:51:32.233553 containerd[1506]: time="2025-01-15T13:51:32.230800315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:32.233553 containerd[1506]: time="2025-01-15T13:51:32.230939170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:32.233553 containerd[1506]: time="2025-01-15T13:51:32.230958830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:32.233553 containerd[1506]: time="2025-01-15T13:51:32.231115307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:32.298586 systemd[1]: Started cri-containerd-83b6ee55ab64375883be17e13c61794327a2750ed862e840e6f58a468bbd6464.scope - libcontainer container 83b6ee55ab64375883be17e13c61794327a2750ed862e840e6f58a468bbd6464. Jan 15 13:51:32.321045 containerd[1506]: time="2025-01-15T13:51:32.320921060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:51:32.322099 containerd[1506]: time="2025-01-15T13:51:32.321854169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:51:32.322099 containerd[1506]: time="2025-01-15T13:51:32.321925824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:32.322754 containerd[1506]: time="2025-01-15T13:51:32.322148907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:51:32.373533 systemd[1]: Started cri-containerd-f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0.scope - libcontainer container f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0. Jan 15 13:51:32.460893 containerd[1506]: time="2025-01-15T13:51:32.460711009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x7kkg,Uid:386f4eff-f3b1-40d9-91d9-c03f7f2b8ddb,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\"" Jan 15 13:51:32.464497 containerd[1506]: time="2025-01-15T13:51:32.464441110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 15 13:51:32.471490 containerd[1506]: time="2025-01-15T13:51:32.471231994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866f5645d9-rddph,Uid:14e52655-00e9-400f-9080-0c860ee484b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"83b6ee55ab64375883be17e13c61794327a2750ed862e840e6f58a468bbd6464\"" Jan 15 13:51:33.424069 kubelet[2737]: E0115 13:51:33.423147 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:35.088085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139221425.mount: Deactivated successfully. 
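
[editor's note] The kubelet error burst above is FlexVolume dynamic probing: the kubelet executes each binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and parses its stdout as JSON. Since nodeagent~uds/uds is not present, the call produces empty output, and unmarshalling an empty string is exactly what yields "unexpected end of JSON input". A minimal Go sketch of the failure mode; the DriverStatus shape is an illustrative stand-in for the kubelet's internal type, not the kubelet source:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus approximates what kubelet's driver-call.go expects a
    // FlexVolume driver to print in response to "init" (assumed shape).
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // A missing driver binary yields empty stdout; unmarshalling ""
        // reproduces the error logged above.
        var st DriverStatus
        fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

        // What a healthy driver would print for "init", silencing the probe:
        out, _ := json.Marshal(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }
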
Jan 15 13:51:35.225841 containerd[1506]: time="2025-01-15T13:51:35.225745869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:35.227016 containerd[1506]: time="2025-01-15T13:51:35.226925535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 15 13:51:35.227887 containerd[1506]: time="2025-01-15T13:51:35.227828310Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:35.230736 containerd[1506]: time="2025-01-15T13:51:35.230672852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:35.232021 containerd[1506]: time="2025-01-15T13:51:35.231848901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.767162262s" Jan 15 13:51:35.232021 containerd[1506]: time="2025-01-15T13:51:35.231896704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 15 13:51:35.233509 containerd[1506]: time="2025-01-15T13:51:35.233136286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 15 13:51:35.235916 containerd[1506]: time="2025-01-15T13:51:35.235544902Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 13:51:35.272952 containerd[1506]: time="2025-01-15T13:51:35.272756660Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041\"" Jan 15 13:51:35.275429 containerd[1506]: time="2025-01-15T13:51:35.274100695Z" level=info msg="StartContainer for \"bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041\"" Jan 15 13:51:35.345538 systemd[1]: Started cri-containerd-bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041.scope - libcontainer container bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041. Jan 15 13:51:35.424830 kubelet[2737]: E0115 13:51:35.423729 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:35.435563 systemd[1]: cri-containerd-bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041.scope: Deactivated successfully. 
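
[editor's note] On the transient mount units in this stretch (var-lib-containerd-tmpmounts-containerd\x2dmount1139221425.mount and friends): systemd encodes the mount path in the unit name, with '/' becoming '-' and a literal '-' escaped as \x2d. A simplified Go sketch of the reverse mapping; real systemd escaping handles further \xXX sequences, so this covers only the cases visible here:

    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeMountUnit undoes the two escape rules seen in the log:
    // '-' separates path components, and a literal '-' is written \x2d.
    func unescapeMountUnit(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        path := strings.ReplaceAll(name, "-", "/")   // component separators
        path = strings.ReplaceAll(path, `\x2d`, "-") // escaped hyphens
        return "/" + path
    }

    func main() {
        fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1139221425.mount`))
        // Output: /var/lib/containerd/tmpmounts/containerd-mount1139221425
    }
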
Jan 15 13:51:35.483222 containerd[1506]: time="2025-01-15T13:51:35.483162276Z" level=info msg="StartContainer for \"bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041\" returns successfully" Jan 15 13:51:35.570706 containerd[1506]: time="2025-01-15T13:51:35.537743830Z" level=info msg="shim disconnected" id=bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041 namespace=k8s.io Jan 15 13:51:35.570974 containerd[1506]: time="2025-01-15T13:51:35.570713902Z" level=warning msg="cleaning up after shim disconnected" id=bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041 namespace=k8s.io Jan 15 13:51:35.570974 containerd[1506]: time="2025-01-15T13:51:35.570743208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 13:51:35.599447 containerd[1506]: time="2025-01-15T13:51:35.598917133Z" level=warning msg="cleanup warnings time=\"2025-01-15T13:51:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 15 13:51:36.035296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf66bbac1cd9b14f96df7ce8b79e03bff01c94cc1e6f910bab260798a631f041-rootfs.mount: Deactivated successfully. Jan 15 13:51:37.424427 kubelet[2737]: E0115 13:51:37.422829 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:38.369435 containerd[1506]: time="2025-01-15T13:51:38.369371149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:38.370775 containerd[1506]: time="2025-01-15T13:51:38.370485725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 15 13:51:38.371718 containerd[1506]: time="2025-01-15T13:51:38.371644912Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:38.374704 containerd[1506]: time="2025-01-15T13:51:38.374643237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:38.376044 containerd[1506]: time="2025-01-15T13:51:38.375875229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.142660167s" Jan 15 13:51:38.376044 containerd[1506]: time="2025-01-15T13:51:38.375918077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 15 13:51:38.378822 containerd[1506]: time="2025-01-15T13:51:38.378784056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 15 13:51:38.405829 containerd[1506]: time="2025-01-15T13:51:38.405718980Z" level=info msg="CreateContainer within 
sandbox \"83b6ee55ab64375883be17e13c61794327a2750ed862e840e6f58a468bbd6464\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 13:51:38.424114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645205214.mount: Deactivated successfully. Jan 15 13:51:38.428202 containerd[1506]: time="2025-01-15T13:51:38.428063106Z" level=info msg="CreateContainer within sandbox \"83b6ee55ab64375883be17e13c61794327a2750ed862e840e6f58a468bbd6464\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eedfe2fd73ffa9e23a3cbaf8bdb36e1c89b387346103702576f6907db5bbe199\"" Jan 15 13:51:38.428806 containerd[1506]: time="2025-01-15T13:51:38.428672304Z" level=info msg="StartContainer for \"eedfe2fd73ffa9e23a3cbaf8bdb36e1c89b387346103702576f6907db5bbe199\"" Jan 15 13:51:38.501534 systemd[1]: Started cri-containerd-eedfe2fd73ffa9e23a3cbaf8bdb36e1c89b387346103702576f6907db5bbe199.scope - libcontainer container eedfe2fd73ffa9e23a3cbaf8bdb36e1c89b387346103702576f6907db5bbe199. Jan 15 13:51:38.586955 containerd[1506]: time="2025-01-15T13:51:38.586903831Z" level=info msg="StartContainer for \"eedfe2fd73ffa9e23a3cbaf8bdb36e1c89b387346103702576f6907db5bbe199\" returns successfully" Jan 15 13:51:38.662007 kubelet[2737]: I0115 13:51:38.661375 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-866f5645d9-rddph" podStartSLOduration=1.758304523 podStartE2EDuration="7.661262799s" podCreationTimestamp="2025-01-15 13:51:31 +0000 UTC" firstStartedPulling="2025-01-15 13:51:32.473874128 +0000 UTC m=+21.281322491" lastFinishedPulling="2025-01-15 13:51:38.376832368 +0000 UTC m=+27.184280767" observedRunningTime="2025-01-15 13:51:38.659263194 +0000 UTC m=+27.466711574" watchObservedRunningTime="2025-01-15 13:51:38.661262799 +0000 UTC m=+27.468711165" Jan 15 13:51:39.424248 kubelet[2737]: E0115 13:51:39.422677 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:41.424565 kubelet[2737]: E0115 13:51:41.424137 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:43.424535 kubelet[2737]: E0115 13:51:43.424498 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:44.685692 containerd[1506]: time="2025-01-15T13:51:44.685408835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:44.687017 containerd[1506]: time="2025-01-15T13:51:44.686744015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 15 13:51:44.687944 containerd[1506]: time="2025-01-15T13:51:44.687845629Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:44.692634 containerd[1506]: time="2025-01-15T13:51:44.692192924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:44.693420 containerd[1506]: time="2025-01-15T13:51:44.693380803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.314545489s" Jan 15 13:51:44.693522 containerd[1506]: time="2025-01-15T13:51:44.693428738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 15 13:51:44.698691 containerd[1506]: time="2025-01-15T13:51:44.698617869Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 13:51:44.719052 containerd[1506]: time="2025-01-15T13:51:44.718977661Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357\"" Jan 15 13:51:44.720213 containerd[1506]: time="2025-01-15T13:51:44.720178470Z" level=info msg="StartContainer for \"26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357\"" Jan 15 13:51:44.794443 systemd[1]: run-containerd-runc-k8s.io-26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357-runc.wIqhCy.mount: Deactivated successfully. Jan 15 13:51:44.804643 systemd[1]: Started cri-containerd-26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357.scope - libcontainer container 26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357. Jan 15 13:51:44.855917 containerd[1506]: time="2025-01-15T13:51:44.855797547Z" level=info msg="StartContainer for \"26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357\" returns successfully" Jan 15 13:51:45.424102 kubelet[2737]: E0115 13:51:45.423620 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:45.785209 systemd[1]: cri-containerd-26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357.scope: Deactivated successfully. Jan 15 13:51:45.828453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357-rootfs.mount: Deactivated successfully. 
Jan 15 13:51:45.881218 kubelet[2737]: I0115 13:51:45.879718 2737 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 15 13:51:45.937482 containerd[1506]: time="2025-01-15T13:51:45.937388522Z" level=info msg="shim disconnected" id=26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357 namespace=k8s.io Jan 15 13:51:45.938454 containerd[1506]: time="2025-01-15T13:51:45.938424603Z" level=warning msg="cleaning up after shim disconnected" id=26e193add7647cc58156277ef883aba36388993da47519bdb37b6a4b3520f357 namespace=k8s.io Jan 15 13:51:45.938714 containerd[1506]: time="2025-01-15T13:51:45.938636352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 13:51:45.962697 kubelet[2737]: I0115 13:51:45.962339 2737 topology_manager.go:215] "Topology Admit Handler" podUID="016d5cbd-c8e8-4eb2-acb8-556581579fdc" podNamespace="kube-system" podName="coredns-76f75df574-6kvqp" Jan 15 13:51:45.970325 kubelet[2737]: I0115 13:51:45.968909 2737 topology_manager.go:215] "Topology Admit Handler" podUID="630e22a6-6c6e-45a9-91a0-71d433560181" podNamespace="calico-apiserver" podName="calico-apiserver-84b56f5647-rqr5k" Jan 15 13:51:45.972326 kubelet[2737]: I0115 13:51:45.971671 2737 topology_manager.go:215] "Topology Admit Handler" podUID="d776d0fd-32e5-42e4-a525-94155ed739e3" podNamespace="calico-apiserver" podName="calico-apiserver-84b56f5647-2r2cz" Jan 15 13:51:45.980648 kubelet[2737]: I0115 13:51:45.979545 2737 topology_manager.go:215] "Topology Admit Handler" podUID="e98d2fb8-4dd9-4f33-9606-af6d41a36b1e" podNamespace="calico-system" podName="calico-kube-controllers-7599984bdf-btpzm" Jan 15 13:51:45.982832 kubelet[2737]: I0115 13:51:45.982666 2737 topology_manager.go:215] "Topology Admit Handler" podUID="82e5b51c-288b-4a98-8331-b0a7cd6134e0" podNamespace="kube-system" podName="coredns-76f75df574-kgv74" Jan 15 13:51:45.991158 systemd[1]: Created slice kubepods-burstable-pod016d5cbd_c8e8_4eb2_acb8_556581579fdc.slice - libcontainer container kubepods-burstable-pod016d5cbd_c8e8_4eb2_acb8_556581579fdc.slice. Jan 15 13:51:46.008528 systemd[1]: Created slice kubepods-besteffort-podd776d0fd_32e5_42e4_a525_94155ed739e3.slice - libcontainer container kubepods-besteffort-podd776d0fd_32e5_42e4_a525_94155ed739e3.slice. Jan 15 13:51:46.020668 systemd[1]: Created slice kubepods-besteffort-pod630e22a6_6c6e_45a9_91a0_71d433560181.slice - libcontainer container kubepods-besteffort-pod630e22a6_6c6e_45a9_91a0_71d433560181.slice. Jan 15 13:51:46.032983 systemd[1]: Created slice kubepods-besteffort-pode98d2fb8_4dd9_4f33_9606_af6d41a36b1e.slice - libcontainer container kubepods-besteffort-pode98d2fb8_4dd9_4f33_9606_af6d41a36b1e.slice. Jan 15 13:51:46.043983 systemd[1]: Created slice kubepods-burstable-pod82e5b51c_288b_4a98_8331_b0a7cd6134e0.slice - libcontainer container kubepods-burstable-pod82e5b51c_288b_4a98_8331_b0a7cd6134e0.slice. 
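
[editor's note] The Created slice entries above show how the kubelet's systemd cgroup driver names pod cgroups: kubepods, the QoS class (burstable or besteffort here), and the pod UID with hyphens replaced by underscores, since '-' is a separator in systemd unit names. A small sketch of the mapping; the helper name is illustrative:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the slice naming visible in the log for
    // burstable and besteffort pods (Guaranteed pods omit the QoS segment).
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "016d5cbd-c8e8-4eb2-acb8-556581579fdc"))
        // Output: kubepods-burstable-pod016d5cbd_c8e8_4eb2_acb8_556581579fdc.slice
    }
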
Jan 15 13:51:46.077445 kubelet[2737]: I0115 13:51:46.077401 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98d2fb8-4dd9-4f33-9606-af6d41a36b1e-tigera-ca-bundle\") pod \"calico-kube-controllers-7599984bdf-btpzm\" (UID: \"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e\") " pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" Jan 15 13:51:46.077445 kubelet[2737]: I0115 13:51:46.077471 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdlm\" (UniqueName: \"kubernetes.io/projected/e98d2fb8-4dd9-4f33-9606-af6d41a36b1e-kube-api-access-bgdlm\") pod \"calico-kube-controllers-7599984bdf-btpzm\" (UID: \"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e\") " pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" Jan 15 13:51:46.077783 kubelet[2737]: I0115 13:51:46.077514 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4df\" (UniqueName: \"kubernetes.io/projected/d776d0fd-32e5-42e4-a525-94155ed739e3-kube-api-access-bb4df\") pod \"calico-apiserver-84b56f5647-2r2cz\" (UID: \"d776d0fd-32e5-42e4-a525-94155ed739e3\") " pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" Jan 15 13:51:46.077783 kubelet[2737]: I0115 13:51:46.077598 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/630e22a6-6c6e-45a9-91a0-71d433560181-calico-apiserver-certs\") pod \"calico-apiserver-84b56f5647-rqr5k\" (UID: \"630e22a6-6c6e-45a9-91a0-71d433560181\") " pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" Jan 15 13:51:46.077783 kubelet[2737]: I0115 13:51:46.077641 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r626f\" (UniqueName: \"kubernetes.io/projected/82e5b51c-288b-4a98-8331-b0a7cd6134e0-kube-api-access-r626f\") pod \"coredns-76f75df574-kgv74\" (UID: \"82e5b51c-288b-4a98-8331-b0a7cd6134e0\") " pod="kube-system/coredns-76f75df574-kgv74" Jan 15 13:51:46.077783 kubelet[2737]: I0115 13:51:46.077676 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82e5b51c-288b-4a98-8331-b0a7cd6134e0-config-volume\") pod \"coredns-76f75df574-kgv74\" (UID: \"82e5b51c-288b-4a98-8331-b0a7cd6134e0\") " pod="kube-system/coredns-76f75df574-kgv74" Jan 15 13:51:46.077783 kubelet[2737]: I0115 13:51:46.077726 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d776d0fd-32e5-42e4-a525-94155ed739e3-calico-apiserver-certs\") pod \"calico-apiserver-84b56f5647-2r2cz\" (UID: \"d776d0fd-32e5-42e4-a525-94155ed739e3\") " pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" Jan 15 13:51:46.078039 kubelet[2737]: I0115 13:51:46.077771 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/016d5cbd-c8e8-4eb2-acb8-556581579fdc-config-volume\") pod \"coredns-76f75df574-6kvqp\" (UID: \"016d5cbd-c8e8-4eb2-acb8-556581579fdc\") " pod="kube-system/coredns-76f75df574-6kvqp" Jan 15 13:51:46.078039 kubelet[2737]: I0115 13:51:46.077807 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-k8nrx\" (UniqueName: \"kubernetes.io/projected/016d5cbd-c8e8-4eb2-acb8-556581579fdc-kube-api-access-k8nrx\") pod \"coredns-76f75df574-6kvqp\" (UID: \"016d5cbd-c8e8-4eb2-acb8-556581579fdc\") " pod="kube-system/coredns-76f75df574-6kvqp" Jan 15 13:51:46.078039 kubelet[2737]: I0115 13:51:46.077846 2737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp572\" (UniqueName: \"kubernetes.io/projected/630e22a6-6c6e-45a9-91a0-71d433560181-kube-api-access-hp572\") pod \"calico-apiserver-84b56f5647-rqr5k\" (UID: \"630e22a6-6c6e-45a9-91a0-71d433560181\") " pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" Jan 15 13:51:46.302792 containerd[1506]: time="2025-01-15T13:51:46.302652344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6kvqp,Uid:016d5cbd-c8e8-4eb2-acb8-556581579fdc,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:46.317511 containerd[1506]: time="2025-01-15T13:51:46.317067967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-2r2cz,Uid:d776d0fd-32e5-42e4-a525-94155ed739e3,Namespace:calico-apiserver,Attempt:0,}" Jan 15 13:51:46.328951 containerd[1506]: time="2025-01-15T13:51:46.328670107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-rqr5k,Uid:630e22a6-6c6e-45a9-91a0-71d433560181,Namespace:calico-apiserver,Attempt:0,}" Jan 15 13:51:46.348524 containerd[1506]: time="2025-01-15T13:51:46.347943178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7599984bdf-btpzm,Uid:e98d2fb8-4dd9-4f33-9606-af6d41a36b1e,Namespace:calico-system,Attempt:0,}" Jan 15 13:51:46.354375 containerd[1506]: time="2025-01-15T13:51:46.354001861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kgv74,Uid:82e5b51c-288b-4a98-8331-b0a7cd6134e0,Namespace:kube-system,Attempt:0,}" Jan 15 13:51:46.679089 containerd[1506]: time="2025-01-15T13:51:46.678866251Z" level=error msg="Failed to destroy network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.681433 containerd[1506]: time="2025-01-15T13:51:46.679193627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 13:51:46.705193 containerd[1506]: time="2025-01-15T13:51:46.705120597Z" level=error msg="Failed to destroy network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.712480 containerd[1506]: time="2025-01-15T13:51:46.711635253Z" level=error msg="encountered an error cleaning up failed sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.713747 containerd[1506]: time="2025-01-15T13:51:46.713508221Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7599984bdf-btpzm,Uid:e98d2fb8-4dd9-4f33-9606-af6d41a36b1e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.715882 containerd[1506]: time="2025-01-15T13:51:46.713788891Z" level=error msg="encountered an error cleaning up failed sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.715997 containerd[1506]: time="2025-01-15T13:51:46.715820720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kgv74,Uid:82e5b51c-288b-4a98-8331-b0a7cd6134e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.723045 containerd[1506]: time="2025-01-15T13:51:46.713905083Z" level=error msg="Failed to destroy network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.723645 containerd[1506]: time="2025-01-15T13:51:46.723604206Z" level=error msg="encountered an error cleaning up failed sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.724133 containerd[1506]: time="2025-01-15T13:51:46.723935894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-rqr5k,Uid:630e22a6-6c6e-45a9-91a0-71d433560181,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.724133 containerd[1506]: time="2025-01-15T13:51:46.714452353Z" level=error msg="Failed to destroy network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.724497 kubelet[2737]: E0115 13:51:46.724454 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.727583 kubelet[2737]: E0115 13:51:46.724565 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" Jan 15 13:51:46.727583 kubelet[2737]: E0115 13:51:46.724607 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" Jan 15 13:51:46.727583 kubelet[2737]: E0115 13:51:46.724684 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b56f5647-rqr5k_calico-apiserver(630e22a6-6c6e-45a9-91a0-71d433560181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b56f5647-rqr5k_calico-apiserver(630e22a6-6c6e-45a9-91a0-71d433560181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" podUID="630e22a6-6c6e-45a9-91a0-71d433560181" Jan 15 13:51:46.728704 containerd[1506]: time="2025-01-15T13:51:46.726996610Z" level=error msg="encountered an error cleaning up failed sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.728704 containerd[1506]: time="2025-01-15T13:51:46.727059491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-2r2cz,Uid:d776d0fd-32e5-42e4-a525-94155ed739e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.728704 containerd[1506]: time="2025-01-15T13:51:46.727276657Z" level=error msg="Failed to destroy network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.728897 kubelet[2737]: E0115 13:51:46.724965 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.728897 kubelet[2737]: E0115 13:51:46.725003 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" Jan 15 13:51:46.728897 kubelet[2737]: E0115 13:51:46.725029 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" Jan 15 13:51:46.729967 kubelet[2737]: E0115 13:51:46.725080 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7599984bdf-btpzm_calico-system(e98d2fb8-4dd9-4f33-9606-af6d41a36b1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7599984bdf-btpzm_calico-system(e98d2fb8-4dd9-4f33-9606-af6d41a36b1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" podUID="e98d2fb8-4dd9-4f33-9606-af6d41a36b1e" Jan 15 13:51:46.729967 kubelet[2737]: E0115 13:51:46.726459 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.729967 kubelet[2737]: E0115 13:51:46.726594 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kgv74" Jan 15 13:51:46.730411 containerd[1506]: time="2025-01-15T13:51:46.729587847Z" level=error msg="encountered an error cleaning up failed sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 
13:51:46.730411 containerd[1506]: time="2025-01-15T13:51:46.729639496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6kvqp,Uid:016d5cbd-c8e8-4eb2-acb8-556581579fdc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.730965 kubelet[2737]: E0115 13:51:46.726661 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kgv74" Jan 15 13:51:46.730965 kubelet[2737]: E0115 13:51:46.727252 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.730965 kubelet[2737]: E0115 13:51:46.727293 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" Jan 15 13:51:46.730965 kubelet[2737]: E0115 13:51:46.727343 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" Jan 15 13:51:46.732405 kubelet[2737]: E0115 13:51:46.728286 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kgv74_kube-system(82e5b51c-288b-4a98-8331-b0a7cd6134e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kgv74_kube-system(82e5b51c-288b-4a98-8331-b0a7cd6134e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kgv74" podUID="82e5b51c-288b-4a98-8331-b0a7cd6134e0" Jan 15 13:51:46.732405 kubelet[2737]: E0115 13:51:46.728356 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b56f5647-2r2cz_calico-apiserver(d776d0fd-32e5-42e4-a525-94155ed739e3)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b56f5647-2r2cz_calico-apiserver(d776d0fd-32e5-42e4-a525-94155ed739e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" podUID="d776d0fd-32e5-42e4-a525-94155ed739e3" Jan 15 13:51:46.732405 kubelet[2737]: E0115 13:51:46.729835 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:46.732645 kubelet[2737]: E0115 13:51:46.729878 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6kvqp" Jan 15 13:51:46.732645 kubelet[2737]: E0115 13:51:46.729910 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6kvqp" Jan 15 13:51:46.732645 kubelet[2737]: E0115 13:51:46.729971 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-6kvqp_kube-system(016d5cbd-c8e8-4eb2-acb8-556581579fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-6kvqp_kube-system(016d5cbd-c8e8-4eb2-acb8-556581579fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6kvqp" podUID="016d5cbd-c8e8-4eb2-acb8-556581579fdc" Jan 15 13:51:47.432982 systemd[1]: Created slice kubepods-besteffort-podd4ec8923_4d40_4539_b7a9_8d3c151dc6d9.slice - libcontainer container kubepods-besteffort-podd4ec8923_4d40_4539_b7a9_8d3c151dc6d9.slice. 
Jan 15 13:51:47.436618 containerd[1506]: time="2025-01-15T13:51:47.436464644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4spm,Uid:d4ec8923-4d40-4539-b7a9-8d3c151dc6d9,Namespace:calico-system,Attempt:0,}" Jan 15 13:51:47.524955 containerd[1506]: time="2025-01-15T13:51:47.524892482Z" level=error msg="Failed to destroy network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.529298 containerd[1506]: time="2025-01-15T13:51:47.527370164Z" level=error msg="encountered an error cleaning up failed sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.529298 containerd[1506]: time="2025-01-15T13:51:47.527449823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4spm,Uid:d4ec8923-4d40-4539-b7a9-8d3c151dc6d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.531208 kubelet[2737]: E0115 13:51:47.528886 2737 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.531208 kubelet[2737]: E0115 13:51:47.529070 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:47.531208 kubelet[2737]: E0115 13:51:47.529106 2737 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n4spm" Jan 15 13:51:47.531534 kubelet[2737]: E0115 13:51:47.529238 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n4spm_calico-system(d4ec8923-4d40-4539-b7a9-8d3c151dc6d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n4spm_calico-system(d4ec8923-4d40-4539-b7a9-8d3c151dc6d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:47.531927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43-shm.mount: Deactivated successfully. Jan 15 13:51:47.678734 kubelet[2737]: I0115 13:51:47.678685 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:47.691442 kubelet[2737]: I0115 13:51:47.690826 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:51:47.692691 containerd[1506]: time="2025-01-15T13:51:47.692643582Z" level=info msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\"" Jan 15 13:51:47.694403 containerd[1506]: time="2025-01-15T13:51:47.694179560Z" level=info msg="Ensure that sandbox 64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a in task-service has been cleanup successfully" Jan 15 13:51:47.697936 containerd[1506]: time="2025-01-15T13:51:47.697876042Z" level=info msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\"" Jan 15 13:51:47.698727 containerd[1506]: time="2025-01-15T13:51:47.698153083Z" level=info msg="Ensure that sandbox b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7 in task-service has been cleanup successfully" Jan 15 13:51:47.725576 kubelet[2737]: I0115 13:51:47.725521 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:51:47.729643 containerd[1506]: time="2025-01-15T13:51:47.729265508Z" level=info msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" Jan 15 13:51:47.731261 containerd[1506]: time="2025-01-15T13:51:47.731223585Z" level=info msg="Ensure that sandbox e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af in task-service has been cleanup successfully" Jan 15 13:51:47.746941 kubelet[2737]: I0115 13:51:47.746892 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:51:47.749514 containerd[1506]: time="2025-01-15T13:51:47.749129309Z" level=info msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" Jan 15 13:51:47.751672 containerd[1506]: time="2025-01-15T13:51:47.750339456Z" level=info msg="Ensure that sandbox 5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141 in task-service has been cleanup successfully" Jan 15 13:51:47.753210 kubelet[2737]: I0115 13:51:47.753178 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:47.753949 containerd[1506]: time="2025-01-15T13:51:47.753864813Z" level=info msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" Jan 15 13:51:47.754177 containerd[1506]: time="2025-01-15T13:51:47.754136823Z" level=info msg="Ensure that sandbox 
64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43 in task-service has been cleanup successfully" Jan 15 13:51:47.769564 kubelet[2737]: I0115 13:51:47.769495 2737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:51:47.775641 containerd[1506]: time="2025-01-15T13:51:47.775376116Z" level=info msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\"" Jan 15 13:51:47.775769 containerd[1506]: time="2025-01-15T13:51:47.775689752Z" level=info msg="Ensure that sandbox cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72 in task-service has been cleanup successfully" Jan 15 13:51:47.864169 containerd[1506]: time="2025-01-15T13:51:47.864093405Z" level=error msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" failed" error="failed to destroy network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.865060 kubelet[2737]: E0115 13:51:47.864989 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:51:47.865769 kubelet[2737]: E0115 13:51:47.865386 2737 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"} Jan 15 13:51:47.865769 kubelet[2737]: E0115 13:51:47.865472 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"016d5cbd-c8e8-4eb2-acb8-556581579fdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.865769 kubelet[2737]: E0115 13:51:47.865732 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"016d5cbd-c8e8-4eb2-acb8-556581579fdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6kvqp" podUID="016d5cbd-c8e8-4eb2-acb8-556581579fdc" Jan 15 13:51:47.880080 containerd[1506]: time="2025-01-15T13:51:47.879751666Z" level=error msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" failed" error="failed to destroy network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.880664 kubelet[2737]: E0115 13:51:47.880441 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:47.880664 kubelet[2737]: E0115 13:51:47.880493 2737 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"} Jan 15 13:51:47.880664 kubelet[2737]: E0115 13:51:47.880558 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d776d0fd-32e5-42e4-a525-94155ed739e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.880664 kubelet[2737]: E0115 13:51:47.880615 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d776d0fd-32e5-42e4-a525-94155ed739e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" podUID="d776d0fd-32e5-42e4-a525-94155ed739e3" Jan 15 13:51:47.881565 containerd[1506]: time="2025-01-15T13:51:47.881285198Z" level=error msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" failed" error="failed to destroy network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.882161 kubelet[2737]: E0115 13:51:47.881975 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:51:47.882395 kubelet[2737]: E0115 13:51:47.882026 2737 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af"} Jan 15 13:51:47.883384 kubelet[2737]: E0115 13:51:47.882520 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"630e22a6-6c6e-45a9-91a0-71d433560181\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.883384 kubelet[2737]: E0115 13:51:47.882604 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"630e22a6-6c6e-45a9-91a0-71d433560181\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" podUID="630e22a6-6c6e-45a9-91a0-71d433560181" Jan 15 13:51:47.888197 containerd[1506]: time="2025-01-15T13:51:47.888147215Z" level=error msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" failed" error="failed to destroy network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.888698 kubelet[2737]: E0115 13:51:47.888515 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:51:47.888698 kubelet[2737]: E0115 13:51:47.888562 2737 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141"} Jan 15 13:51:47.888698 kubelet[2737]: E0115 13:51:47.888618 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.888698 kubelet[2737]: E0115 13:51:47.888655 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" podUID="e98d2fb8-4dd9-4f33-9606-af6d41a36b1e" Jan 15 
13:51:47.895135 containerd[1506]: time="2025-01-15T13:51:47.895081778Z" level=error msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" failed" error="failed to destroy network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.895686 kubelet[2737]: E0115 13:51:47.895286 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:47.895686 kubelet[2737]: E0115 13:51:47.895385 2737 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43"} Jan 15 13:51:47.895686 kubelet[2737]: E0115 13:51:47.895456 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.895686 kubelet[2737]: E0115 13:51:47.895496 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n4spm" podUID="d4ec8923-4d40-4539-b7a9-8d3c151dc6d9" Jan 15 13:51:47.898146 containerd[1506]: time="2025-01-15T13:51:47.898069722Z" level=error msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" failed" error="failed to destroy network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:51:47.898383 kubelet[2737]: E0115 13:51:47.898344 2737 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:51:47.898520 kubelet[2737]: E0115 13:51:47.898400 2737 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"} Jan 15 13:51:47.898520 kubelet[2737]: E0115 13:51:47.898445 2737 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82e5b51c-288b-4a98-8331-b0a7cd6134e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:51:47.898520 kubelet[2737]: E0115 13:51:47.898480 2737 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82e5b51c-288b-4a98-8331-b0a7cd6134e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kgv74" podUID="82e5b51c-288b-4a98-8331-b0a7cd6134e0" Jan 15 13:51:56.531509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580568509.mount: Deactivated successfully. Jan 15 13:51:56.685351 containerd[1506]: time="2025-01-15T13:51:56.683817036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 15 13:51:56.689178 containerd[1506]: time="2025-01-15T13:51:56.689128604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.008727335s" Jan 15 13:51:56.689393 containerd[1506]: time="2025-01-15T13:51:56.689360625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 15 13:51:56.714026 containerd[1506]: time="2025-01-15T13:51:56.713708748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:56.753591 containerd[1506]: time="2025-01-15T13:51:56.753519992Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:56.759083 containerd[1506]: time="2025-01-15T13:51:56.758991497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:51:56.801965 containerd[1506]: time="2025-01-15T13:51:56.801699865Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 13:51:56.884440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432469630.mount: Deactivated successfully. 
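Every error in the burst above, on both the RunPodSandbox and StopPodSandbox paths, reduces to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes once it is running, and calico/node could not start until its image finished pulling (142,741,872 bytes in 10.008727335s, roughly 14 MB/s). A minimal standalone sketch of that readiness test, assuming only the stock /var/lib/calico hostPath location, not Calico's actual code:

    // nodename_check.go - repeats the exact test behind "stat
    // /var/lib/calico/nodename: no such file or directory" so a node
    // mid-bootstrap can be inspected by hand.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // stock hostPath mount

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // The state every sandbox operation above failed in:
            // calico/node is not running yet, or has not mounted
            // /var/lib/calico/ from the host.
            fmt.Println("not ready:", err)
            os.Exit(1)
        } else if err != nil {
            fmt.Println("stat failed:", err)
            os.Exit(1)
        }
        fmt.Printf("ready: node registered as %q\n", strings.TrimSpace(string(data)))
    }

Run on this node, it should flip from the not-ready branch to printing the registered name (srv-e1jz5.gb1.brightbox.com) at the moment calico-node comes up, which is exactly the transition the log shows at 13:51:57.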
Jan 15 13:51:56.898106 containerd[1506]: time="2025-01-15T13:51:56.897931417Z" level=info msg="CreateContainer within sandbox \"f4e6cde903d87b436c39da9e1787942acd164351505d7515371b809d1c02f2f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e341b322c066fbfd6e2a851efa578d6a82bd1b165af3b8c3e339f7e392526605\"" Jan 15 13:51:56.906072 containerd[1506]: time="2025-01-15T13:51:56.905351029Z" level=info msg="StartContainer for \"e341b322c066fbfd6e2a851efa578d6a82bd1b165af3b8c3e339f7e392526605\"" Jan 15 13:51:57.038564 systemd[1]: Started cri-containerd-e341b322c066fbfd6e2a851efa578d6a82bd1b165af3b8c3e339f7e392526605.scope - libcontainer container e341b322c066fbfd6e2a851efa578d6a82bd1b165af3b8c3e339f7e392526605. Jan 15 13:51:57.104553 containerd[1506]: time="2025-01-15T13:51:57.104181146Z" level=info msg="StartContainer for \"e341b322c066fbfd6e2a851efa578d6a82bd1b165af3b8c3e339f7e392526605\" returns successfully" Jan 15 13:51:57.411288 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 13:51:57.413342 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 13:51:57.886504 kubelet[2737]: I0115 13:51:57.886454 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-x7kkg" podStartSLOduration=2.615609003 podStartE2EDuration="26.841791216s" podCreationTimestamp="2025-01-15 13:51:31 +0000 UTC" firstStartedPulling="2025-01-15 13:51:32.463773925 +0000 UTC m=+21.271222285" lastFinishedPulling="2025-01-15 13:51:56.689956136 +0000 UTC m=+45.497404498" observedRunningTime="2025-01-15 13:51:57.834619675 +0000 UTC m=+46.642068094" watchObservedRunningTime="2025-01-15 13:51:57.841791216 +0000 UTC m=+46.649239579" Jan 15 13:51:58.810096 kubelet[2737]: I0115 13:51:58.810021 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:51:59.407534 kernel: bpftool[3927]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 13:51:59.432058 containerd[1506]: time="2025-01-15T13:51:59.429847450Z" level=info msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\"" Jan 15 13:51:59.437982 containerd[1506]: time="2025-01-15T13:51:59.436734837Z" level=info msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.580 [INFO][3955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.581 [INFO][3955] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" iface="eth0" netns="/var/run/netns/cni-aa0fb40e-1e5c-0831-854d-7cbd8e649b5b" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.582 [INFO][3955] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" iface="eth0" netns="/var/run/netns/cni-aa0fb40e-1e5c-0831-854d-7cbd8e649b5b" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.583 [INFO][3955] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" iface="eth0" netns="/var/run/netns/cni-aa0fb40e-1e5c-0831-854d-7cbd8e649b5b" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.584 [INFO][3955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.584 [INFO][3955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.781 [INFO][3968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.783 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.784 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.800 [WARNING][3968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.800 [INFO][3968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.803 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:51:59.815532 containerd[1506]: 2025-01-15 13:51:59.806 [INFO][3955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:51:59.823065 systemd[1]: run-netns-cni\x2daa0fb40e\x2d1e5c\x2d0831\x2d854d\x2d7cbd8e649b5b.mount: Deactivated successfully. Jan 15 13:51:59.835330 containerd[1506]: time="2025-01-15T13:51:59.834518325Z" level=info msg="TearDown network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" successfully" Jan 15 13:51:59.835330 containerd[1506]: time="2025-01-15T13:51:59.834572249Z" level=info msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" returns successfully" Jan 15 13:51:59.837859 containerd[1506]: time="2025-01-15T13:51:59.837826462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4spm,Uid:d4ec8923-4d40-4539-b7a9-8d3c151dc6d9,Namespace:calico-system,Attempt:1,}" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.575 [INFO][3954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.575 [INFO][3954] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" iface="eth0" netns="/var/run/netns/cni-46ddcd20-0187-0ea9-5e9e-30c7de329d27" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.576 [INFO][3954] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" iface="eth0" netns="/var/run/netns/cni-46ddcd20-0187-0ea9-5e9e-30c7de329d27" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.577 [INFO][3954] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" iface="eth0" netns="/var/run/netns/cni-46ddcd20-0187-0ea9-5e9e-30c7de329d27" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.578 [INFO][3954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.578 [INFO][3954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.781 [INFO][3966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.783 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.803 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.826 [WARNING][3966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.826 [INFO][3966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.828 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:51:59.840529 containerd[1506]: 2025-01-15 13:51:59.835 [INFO][3954] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Jan 15 13:51:59.844349 containerd[1506]: time="2025-01-15T13:51:59.843048811Z" level=info msg="TearDown network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" successfully" Jan 15 13:51:59.844349 containerd[1506]: time="2025-01-15T13:51:59.843080688Z" level=info msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" returns successfully" Jan 15 13:51:59.844349 containerd[1506]: time="2025-01-15T13:51:59.843781069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-2r2cz,Uid:d776d0fd-32e5-42e4-a525-94155ed739e3,Namespace:calico-apiserver,Attempt:1,}" Jan 15 13:51:59.849748 systemd[1]: run-netns-cni\x2d46ddcd20\x2d0187\x2d0ea9\x2d5e9e\x2d30c7de329d27.mount: Deactivated successfully. Jan 15 13:51:59.943841 systemd-networkd[1432]: vxlan.calico: Link UP Jan 15 13:51:59.946012 systemd-networkd[1432]: vxlan.calico: Gained carrier Jan 15 13:52:00.343721 systemd-networkd[1432]: calic33739a91ee: Link UP Jan 15 13:52:00.347856 systemd-networkd[1432]: calic33739a91ee: Gained carrier Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.138 [INFO][3996] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0 calico-apiserver-84b56f5647- calico-apiserver d776d0fd-32e5-42e4-a525-94155ed739e3 765 0 2025-01-15 13:51:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b56f5647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com calico-apiserver-84b56f5647-2r2cz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic33739a91ee [] []}} ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.139 [INFO][3996] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.246 [INFO][4045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" HandleID="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.274 [INFO][4045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" HandleID="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b560), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"srv-e1jz5.gb1.brightbox.com", "pod":"calico-apiserver-84b56f5647-2r2cz", "timestamp":"2025-01-15 13:52:00.246048956 +0000 UTC"}, Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.274 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.275 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.275 [INFO][4045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.278 [INFO][4045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.293 [INFO][4045] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.299 [INFO][4045] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.302 [INFO][4045] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.305 [INFO][4045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.305 [INFO][4045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.308 [INFO][4045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9 Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.316 [INFO][4045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.326 [INFO][4045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.193/26] block=192.168.23.192/26 handle="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.327 [INFO][4045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.193/26] handle="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.327 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:52:00.396637 containerd[1506]: 2025-01-15 13:52:00.327 [INFO][4045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.193/26] IPv6=[] ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" HandleID="k8s-pod-network.961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.332 [INFO][3996] cni-plugin/k8s.go 386: Populated endpoint ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d776d0fd-32e5-42e4-a525-94155ed739e3", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-84b56f5647-2r2cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic33739a91ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.334 [INFO][3996] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.193/32] ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.335 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic33739a91ee ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.349 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.352 [INFO][3996] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d776d0fd-32e5-42e4-a525-94155ed739e3", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9", Pod:"calico-apiserver-84b56f5647-2r2cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic33739a91ee", MAC:"c2:5e:7c:a2:aa:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:00.402160 containerd[1506]: 2025-01-15 13:52:00.394 [INFO][3996] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-2r2cz" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0" Jan 15 13:52:00.436538 containerd[1506]: time="2025-01-15T13:52:00.434664232Z" level=info msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" Jan 15 13:52:00.436538 containerd[1506]: time="2025-01-15T13:52:00.435261221Z" level=info msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\"" Jan 15 13:52:00.442512 containerd[1506]: time="2025-01-15T13:52:00.442145372Z" level=info msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\"" Jan 15 13:52:00.543041 systemd-networkd[1432]: cali3c81b2cc52f: Link UP Jan 15 13:52:00.544104 systemd-networkd[1432]: cali3c81b2cc52f: Gained carrier Jan 15 13:52:00.628434 containerd[1506]: time="2025-01-15T13:52:00.627745084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:00.628434 containerd[1506]: time="2025-01-15T13:52:00.627829154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:00.628434 containerd[1506]: time="2025-01-15T13:52:00.627876507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:00.628434 containerd[1506]: time="2025-01-15T13:52:00.628041756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.096 [INFO][3995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0 csi-node-driver- calico-system d4ec8923-4d40-4539-b7a9-8d3c151dc6d9 764 0 2025-01-15 13:51:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com csi-node-driver-n4spm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3c81b2cc52f [] []}} ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.096 [INFO][3995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.273 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" HandleID="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.293 [INFO][4046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" HandleID="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f6ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-e1jz5.gb1.brightbox.com", "pod":"csi-node-driver-n4spm", "timestamp":"2025-01-15 13:52:00.273245002 +0000 UTC"}, Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.293 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.327 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.327 [INFO][4046] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.330 [INFO][4046] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.339 [INFO][4046] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.358 [INFO][4046] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.364 [INFO][4046] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.370 [INFO][4046] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.370 [INFO][4046] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.374 [INFO][4046] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648 Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.427 [INFO][4046] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.473 [INFO][4046] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.194/26] block=192.168.23.192/26 handle="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.473 [INFO][4046] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.194/26] handle="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.473 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
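Taken together, the two interleaved ADDs show Calico's block-affinity IPAM end to end: each handler confirms this node's affinity to 192.168.23.192/26, loads the block, claims the lowest free address (the scan starts past .192, the block's own network address, hence the first claim of .193), and writes the block back, all bracketed by the host-wide lock. Note that [4046] announces the lock at 13:52:00.293 but only acquires it at 13:52:00.327, the instant [4045] releases it, which is why the racing pods come away with .193 and .194 instead of colliding. Below is a toy in-memory model of that claim-under-lock; the real block is datastore-backed, and the real lock must also exclude other plugin processes (every CNI ADD is its own invocation), so the sync.Mutex here is only a stand-in for the serialization idea:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // blockAllocator is a toy model of one node's affine IPAM block. The
    // mutex stands in for the host-wide IPAM lock; the map stands in for
    // the datastore-backed allocation state.
    type blockAllocator struct {
        mu    sync.Mutex
        block netip.Prefix
        used  map[netip.Addr]bool
    }

    // assign claims the lowest free address, as in "Attempting to assign
    // 1 addresses from block ... Successfully claimed IPs".
    func (b *blockAllocator) assign() (netip.Addr, bool) {
        b.mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        for a := b.block.Addr().Next(); b.block.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        alloc := &blockAllocator{
            block: netip.MustParsePrefix("192.168.23.192/26"),
            used:  map[netip.Addr]bool{},
        }
        var wg sync.WaitGroup
        for _, pod := range []string{"calico-apiserver-84b56f5647-2r2cz", "csi-node-driver-n4spm"} {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                ip, _ := alloc.assign()
                fmt.Println(p, "->", ip) // .193 and .194, order set by the race
            }(pod)
        }
        wg.Wait()
    }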
Jan 15 13:52:00.633687 containerd[1506]: 2025-01-15 13:52:00.473 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.194/26] IPv6=[] ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" HandleID="k8s-pod-network.92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.520 [INFO][3995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-n4spm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c81b2cc52f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.521 [INFO][3995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.194/32] ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.528 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c81b2cc52f ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.543 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.552 [INFO][3995] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648", Pod:"csi-node-driver-n4spm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c81b2cc52f", MAC:"8a:49:56:10:2f:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:00.635676 containerd[1506]: 2025-01-15 13:52:00.622 [INFO][3995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648" Namespace="calico-system" Pod="csi-node-driver-n4spm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:00.703526 systemd[1]: Started cri-containerd-961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9.scope - libcontainer container 961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9. Jan 15 13:52:00.763412 containerd[1506]: time="2025-01-15T13:52:00.762568834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:00.763412 containerd[1506]: time="2025-01-15T13:52:00.762698659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:00.763412 containerd[1506]: time="2025-01-15T13:52:00.762724425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:00.763412 containerd[1506]: time="2025-01-15T13:52:00.762847667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:00.811336 kubelet[2737]: I0115 13:52:00.808233 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:52:00.935508 systemd[1]: Started cri-containerd-92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648.scope - libcontainer container 92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648. 
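The near-duplicate WorkloadEndpoint dumps above differ in exactly two fields, ContainerID and MAC, and that is the point of logging both: k8s.go 386 populates the endpoint from pod metadata, the IPAM result, and the chosen host-side veth name before any dataplane work, dataplane_linux.go then creates the veth pair, and k8s.go 414 fills in the MAC and active container ID before k8s.go 500 persists the finished record. A condensed model of that two-phase write, with a trimmed struct standing in for the real v3.WorkloadEndpoint and values taken from the 2r2cz dumps:

    package main

    import "fmt"

    // workloadEndpoint is a trimmed stand-in for v3.WorkloadEndpoint,
    // keeping only the fields involved in the two-phase write.
    type workloadEndpoint struct {
        Pod           string
        IPNetworks    []string
        InterfaceName string
        ContainerID   string // still empty at k8s.go 386
        MAC           string // still empty at k8s.go 386
    }

    func main() {
        // Phase 1, "Populated endpoint": identity, IPAM result, and the
        // chosen host-side veth name, before any dataplane work.
        ep := workloadEndpoint{
            Pod:           "calico-apiserver-84b56f5647-2r2cz",
            IPNetworks:    []string{"192.168.23.193/32"},
            InterfaceName: "calic33739a91ee",
        }
        fmt.Printf("populated: %+v\n", ep)

        // Phase 2, once the veth pair exists: "Added Mac, interface
        // name, and active container ID to endpoint".
        ep.ContainerID = "961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9"
        ep.MAC = "c2:5e:7c:a2:aa:13"
        fmt.Printf("completed: %+v\n", ep)

        // Phase 3, "Wrote updated endpoint to datastore", would persist ep.
    }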
Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.739 [INFO][4114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.739 [INFO][4114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" iface="eth0" netns="/var/run/netns/cni-acd69c52-d013-e704-df3b-28f57c0a39e9" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.739 [INFO][4114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" iface="eth0" netns="/var/run/netns/cni-acd69c52-d013-e704-df3b-28f57c0a39e9" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.758 [INFO][4114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" iface="eth0" netns="/var/run/netns/cni-acd69c52-d013-e704-df3b-28f57c0a39e9" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.758 [INFO][4114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.758 [INFO][4114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.878 [INFO][4179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.878 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:00.878 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:01.007 [WARNING][4179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:01.007 [INFO][4179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:01.019 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:01.038344 containerd[1506]: 2025-01-15 13:52:01.025 [INFO][4114] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:01.040935 containerd[1506]: time="2025-01-15T13:52:01.039412607Z" level=info msg="TearDown network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" successfully" Jan 15 13:52:01.041395 containerd[1506]: time="2025-01-15T13:52:01.041363892Z" level=info msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" returns successfully" Jan 15 13:52:01.042611 systemd[1]: run-netns-cni\x2dacd69c52\x2dd013\x2de704\x2ddf3b\x2d28f57c0a39e9.mount: Deactivated successfully. Jan 15 13:52:01.048223 containerd[1506]: time="2025-01-15T13:52:01.047909542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-rqr5k,Uid:630e22a6-6c6e-45a9-91a0-71d433560181,Namespace:calico-apiserver,Attempt:1,}" Jan 15 13:52:01.157477 containerd[1506]: time="2025-01-15T13:52:01.155519317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-2r2cz,Uid:d776d0fd-32e5-42e4-a525-94155ed739e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9\"" Jan 15 13:52:01.166652 containerd[1506]: time="2025-01-15T13:52:01.166223797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.830 [INFO][4118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.830 [INFO][4118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" iface="eth0" netns="/var/run/netns/cni-e0d2c748-4476-2976-67bb-b6b199b32bae" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.831 [INFO][4118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" iface="eth0" netns="/var/run/netns/cni-e0d2c748-4476-2976-67bb-b6b199b32bae" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.831 [INFO][4118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" iface="eth0" netns="/var/run/netns/cni-e0d2c748-4476-2976-67bb-b6b199b32bae" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.832 [INFO][4118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:00.832 [INFO][4118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.150 [INFO][4192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.151 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.151 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.178 [WARNING][4192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.178 [INFO][4192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.180 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:01.190542 containerd[1506]: 2025-01-15 13:52:01.185 [INFO][4118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:01.193437 containerd[1506]: time="2025-01-15T13:52:01.192529973Z" level=info msg="TearDown network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" successfully" Jan 15 13:52:01.193437 containerd[1506]: time="2025-01-15T13:52:01.192866503Z" level=info msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" returns successfully" Jan 15 13:52:01.200255 containerd[1506]: time="2025-01-15T13:52:01.199897471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kgv74,Uid:82e5b51c-288b-4a98-8331-b0a7cd6134e0,Namespace:kube-system,Attempt:1,}" Jan 15 13:52:01.261959 containerd[1506]: time="2025-01-15T13:52:01.261896561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4spm,Uid:d4ec8923-4d40-4539-b7a9-8d3c151dc6d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648\"" Jan 15 13:52:01.272555 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.065 [INFO][4120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.069 [INFO][4120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" iface="eth0" netns="/var/run/netns/cni-86f6c397-c5a3-3824-aee1-e77935fe16c9" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.069 [INFO][4120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" iface="eth0" netns="/var/run/netns/cni-86f6c397-c5a3-3824-aee1-e77935fe16c9" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.073 [INFO][4120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" iface="eth0" netns="/var/run/netns/cni-86f6c397-c5a3-3824-aee1-e77935fe16c9" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.073 [INFO][4120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.073 [INFO][4120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.231 [INFO][4226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.231 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.231 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.250 [WARNING][4226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.250 [INFO][4226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.262 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:01.298368 containerd[1506]: 2025-01-15 13:52:01.285 [INFO][4120] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Jan 15 13:52:01.300734 containerd[1506]: time="2025-01-15T13:52:01.300444825Z" level=info msg="TearDown network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" successfully" Jan 15 13:52:01.300987 containerd[1506]: time="2025-01-15T13:52:01.300941927Z" level=info msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" returns successfully" Jan 15 13:52:01.305154 containerd[1506]: time="2025-01-15T13:52:01.305116015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6kvqp,Uid:016d5cbd-c8e8-4eb2-acb8-556581579fdc,Namespace:kube-system,Attempt:1,}" Jan 15 13:52:01.473601 containerd[1506]: time="2025-01-15T13:52:01.472455941Z" level=info msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" Jan 15 13:52:01.613556 systemd-networkd[1432]: caliaa9cf520e22: Link UP Jan 15 13:52:01.629728 systemd-networkd[1432]: caliaa9cf520e22: Gained carrier Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.309 [INFO][4239] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0 calico-apiserver-84b56f5647- calico-apiserver 630e22a6-6c6e-45a9-91a0-71d433560181 777 0 2025-01-15 13:51:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b56f5647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com calico-apiserver-84b56f5647-rqr5k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa9cf520e22 [] []}} ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.310 [INFO][4239] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.455 [INFO][4287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" HandleID="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.499 [INFO][4287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" HandleID="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-e1jz5.gb1.brightbox.com", "pod":"calico-apiserver-84b56f5647-rqr5k", "timestamp":"2025-01-15 13:52:01.455558943 +0000 UTC"}, 
Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.499 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.499 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.500 [INFO][4287] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.507 [INFO][4287] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.518 [INFO][4287] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.544 [INFO][4287] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.552 [INFO][4287] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.557 [INFO][4287] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.558 [INFO][4287] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.561 [INFO][4287] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3 Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.571 [INFO][4287] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.584 [INFO][4287] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.195/26] block=192.168.23.192/26 handle="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.584 [INFO][4287] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.195/26] handle="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.584 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:52:01.696979 containerd[1506]: 2025-01-15 13:52:01.584 [INFO][4287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.195/26] IPv6=[] ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" HandleID="k8s-pod-network.a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.595 [INFO][4239] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"630e22a6-6c6e-45a9-91a0-71d433560181", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-84b56f5647-rqr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9cf520e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.595 [INFO][4239] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.195/32] ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.595 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa9cf520e22 ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.637 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.660 [INFO][4239] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"630e22a6-6c6e-45a9-91a0-71d433560181", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3", Pod:"calico-apiserver-84b56f5647-rqr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9cf520e22", MAC:"96:79:03:7c:fb:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:01.701565 containerd[1506]: 2025-01-15 13:52:01.684 [INFO][4239] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3" Namespace="calico-apiserver" Pod="calico-apiserver-84b56f5647-rqr5k" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:01.830941 systemd-networkd[1432]: cali829a485b001: Link UP Jan 15 13:52:01.833792 systemd-networkd[1432]: cali829a485b001: Gained carrier Jan 15 13:52:01.844139 systemd[1]: run-netns-cni\x2de0d2c748\x2d4476\x2d2976\x2d67bb\x2db6b199b32bae.mount: Deactivated successfully. Jan 15 13:52:01.847791 systemd[1]: run-netns-cni\x2d86f6c397\x2dc5a3\x2d3824\x2daee1\x2de77935fe16c9.mount: Deactivated successfully. 
Jan 15 13:52:01.848544 systemd-networkd[1432]: calic33739a91ee: Gained IPv6LL Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.476 [INFO][4273] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0 coredns-76f75df574- kube-system 82e5b51c-288b-4a98-8331-b0a7cd6134e0 779 0 2025-01-15 13:51:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com coredns-76f75df574-kgv74 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali829a485b001 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.476 [INFO][4273] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.652 [INFO][4324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" HandleID="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.700 [INFO][4324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" HandleID="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398a90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-e1jz5.gb1.brightbox.com", "pod":"coredns-76f75df574-kgv74", "timestamp":"2025-01-15 13:52:01.652488811 +0000 UTC"}, Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.701 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.702 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.702 [INFO][4324] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.708 [INFO][4324] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.724 [INFO][4324] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.752 [INFO][4324] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.760 [INFO][4324] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.767 [INFO][4324] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.767 [INFO][4324] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.771 [INFO][4324] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7 Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.788 [INFO][4324] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.806 [INFO][4324] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.196/26] block=192.168.23.192/26 handle="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.807 [INFO][4324] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.196/26] handle="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.807 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:52:01.915734 containerd[1506]: 2025-01-15 13:52:01.807 [INFO][4324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.196/26] IPv6=[] ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" HandleID="k8s-pod-network.a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.818 [INFO][4273] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82e5b51c-288b-4a98-8331-b0a7cd6134e0", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-kgv74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali829a485b001", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.819 [INFO][4273] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.196/32] ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.819 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali829a485b001 ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.834 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" 
WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.855 [INFO][4273] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82e5b51c-288b-4a98-8331-b0a7cd6134e0", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7", Pod:"coredns-76f75df574-kgv74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali829a485b001", MAC:"ee:9f:5a:2a:6a:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:01.919741 containerd[1506]: 2025-01-15 13:52:01.892 [INFO][4273] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7" Namespace="kube-system" Pod="coredns-76f75df574-kgv74" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:01.978892 systemd-networkd[1432]: cali3c81b2cc52f: Gained IPv6LL Jan 15 13:52:02.020022 containerd[1506]: time="2025-01-15T13:52:02.018161226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:02.020022 containerd[1506]: time="2025-01-15T13:52:02.018371351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:02.020022 containerd[1506]: time="2025-01-15T13:52:02.018465124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.024324 containerd[1506]: time="2025-01-15T13:52:02.019956969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.048613 containerd[1506]: time="2025-01-15T13:52:02.047646113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:02.049585 containerd[1506]: time="2025-01-15T13:52:02.048845413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:02.053794 containerd[1506]: time="2025-01-15T13:52:02.050287856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.055562 containerd[1506]: time="2025-01-15T13:52:02.054738359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.111230 systemd-networkd[1432]: calia76b48332ae: Link UP Jan 15 13:52:02.112636 systemd-networkd[1432]: calia76b48332ae: Gained carrier Jan 15 13:52:02.168500 systemd[1]: Started cri-containerd-a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7.scope - libcontainer container a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7. Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.565 [INFO][4288] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0 coredns-76f75df574- kube-system 016d5cbd-c8e8-4eb2-acb8-556581579fdc 780 0 2025-01-15 13:51:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com coredns-76f75df574-6kvqp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia76b48332ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.566 [INFO][4288] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.806 [INFO][4358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" HandleID="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.855 [INFO][4358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" HandleID="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051080), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-e1jz5.gb1.brightbox.com", "pod":"coredns-76f75df574-6kvqp", "timestamp":"2025-01-15 
13:52:01.806530012 +0000 UTC"}, Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.857 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.860 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.860 [INFO][4358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.877 [INFO][4358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.901 [INFO][4358] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.936 [INFO][4358] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.943 [INFO][4358] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.954 [INFO][4358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.957 [INFO][4358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:01.966 [INFO][4358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:02.027 [INFO][4358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:02.071 [INFO][4358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.197/26] block=192.168.23.192/26 handle="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:02.071 [INFO][4358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.197/26] handle="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:02.071 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:52:02.191458 containerd[1506]: 2025-01-15 13:52:02.071 [INFO][4358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.197/26] IPv6=[] ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" HandleID="k8s-pod-network.9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.084 [INFO][4288] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"016d5cbd-c8e8-4eb2-acb8-556581579fdc", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-6kvqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76b48332ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.084 [INFO][4288] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.197/32] ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.084 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia76b48332ae ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.111 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" 
WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.121 [INFO][4288] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"016d5cbd-c8e8-4eb2-acb8-556581579fdc", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc", Pod:"coredns-76f75df574-6kvqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76b48332ae", MAC:"72:cf:3b:a6:ef:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:02.192688 containerd[1506]: 2025-01-15 13:52:02.178 [INFO][4288] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc" Namespace="kube-system" Pod="coredns-76f75df574-6kvqp" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0" Jan 15 13:52:02.220760 systemd[1]: Started cri-containerd-a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3.scope - libcontainer container a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3. 
Jan 15 13:52:02.323220 containerd[1506]: time="2025-01-15T13:52:02.322836592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kgv74,Uid:82e5b51c-288b-4a98-8331-b0a7cd6134e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7\"" Jan 15 13:52:02.334948 containerd[1506]: time="2025-01-15T13:52:02.334872782Z" level=info msg="CreateContainer within sandbox \"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 13:52:02.336198 containerd[1506]: time="2025-01-15T13:52:02.334656721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:02.336198 containerd[1506]: time="2025-01-15T13:52:02.334739352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:02.336198 containerd[1506]: time="2025-01-15T13:52:02.334774787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.338278 containerd[1506]: time="2025-01-15T13:52:02.336028442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.126 [INFO][4353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.126 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" iface="eth0" netns="/var/run/netns/cni-8c8914bc-6682-a1a9-0da6-4dfb2617507f" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.127 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" iface="eth0" netns="/var/run/netns/cni-8c8914bc-6682-a1a9-0da6-4dfb2617507f" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.145 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" iface="eth0" netns="/var/run/netns/cni-8c8914bc-6682-a1a9-0da6-4dfb2617507f" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.146 [INFO][4353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.146 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.359 [INFO][4468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.360 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.361 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.397 [WARNING][4468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.397 [INFO][4468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.405 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:02.416490 containerd[1506]: 2025-01-15 13:52:02.414 [INFO][4353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:02.418125 containerd[1506]: time="2025-01-15T13:52:02.417951584Z" level=info msg="TearDown network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" successfully" Jan 15 13:52:02.418603 containerd[1506]: time="2025-01-15T13:52:02.418134802Z" level=info msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" returns successfully" Jan 15 13:52:02.423490 containerd[1506]: time="2025-01-15T13:52:02.423431133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7599984bdf-btpzm,Uid:e98d2fb8-4dd9-4f33-9606-af6d41a36b1e,Namespace:calico-system,Attempt:1,}" Jan 15 13:52:02.443527 containerd[1506]: time="2025-01-15T13:52:02.443459914Z" level=info msg="CreateContainer within sandbox \"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3950c335f9b10511a353e1291f68062ec9fdc98762346120e6500952f42ded70\"" Jan 15 13:52:02.445991 containerd[1506]: time="2025-01-15T13:52:02.445952409Z" level=info msg="StartContainer for \"3950c335f9b10511a353e1291f68062ec9fdc98762346120e6500952f42ded70\"" Jan 15 13:52:02.490534 systemd[1]: Started cri-containerd-9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc.scope - libcontainer container 9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc. Jan 15 13:52:02.579524 systemd[1]: Started cri-containerd-3950c335f9b10511a353e1291f68062ec9fdc98762346120e6500952f42ded70.scope - libcontainer container 3950c335f9b10511a353e1291f68062ec9fdc98762346120e6500952f42ded70. 
Jan 15 13:52:02.589889 containerd[1506]: time="2025-01-15T13:52:02.589697491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b56f5647-rqr5k,Uid:630e22a6-6c6e-45a9-91a0-71d433560181,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3\"" Jan 15 13:52:02.709195 containerd[1506]: time="2025-01-15T13:52:02.708877310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6kvqp,Uid:016d5cbd-c8e8-4eb2-acb8-556581579fdc,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc\"" Jan 15 13:52:02.720571 containerd[1506]: time="2025-01-15T13:52:02.720523152Z" level=info msg="CreateContainer within sandbox \"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 13:52:02.755589 containerd[1506]: time="2025-01-15T13:52:02.755257075Z" level=info msg="StartContainer for \"3950c335f9b10511a353e1291f68062ec9fdc98762346120e6500952f42ded70\" returns successfully" Jan 15 13:52:02.782341 containerd[1506]: time="2025-01-15T13:52:02.782081547Z" level=info msg="CreateContainer within sandbox \"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6896d846c7294e3b7b79c7e7d12a463b5d9d8fe0d30887801d1c3144b52e9b4\"" Jan 15 13:52:02.783951 containerd[1506]: time="2025-01-15T13:52:02.783910177Z" level=info msg="StartContainer for \"e6896d846c7294e3b7b79c7e7d12a463b5d9d8fe0d30887801d1c3144b52e9b4\"" Jan 15 13:52:02.839802 systemd[1]: run-netns-cni\x2d8c8914bc\x2d6682\x2da1a9\x2d0da6\x2d4dfb2617507f.mount: Deactivated successfully. Jan 15 13:52:02.878633 systemd[1]: Started cri-containerd-e6896d846c7294e3b7b79c7e7d12a463b5d9d8fe0d30887801d1c3144b52e9b4.scope - libcontainer container e6896d846c7294e3b7b79c7e7d12a463b5d9d8fe0d30887801d1c3144b52e9b4. 
Jan 15 13:52:02.925416 systemd-networkd[1432]: cali6a2440da13a: Link UP Jan 15 13:52:02.931068 systemd-networkd[1432]: cali6a2440da13a: Gained carrier Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.674 [INFO][4552] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0 calico-kube-controllers-7599984bdf- calico-system e98d2fb8-4dd9-4f33-9606-af6d41a36b1e 794 0 2025-01-15 13:51:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7599984bdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-e1jz5.gb1.brightbox.com calico-kube-controllers-7599984bdf-btpzm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a2440da13a [] []}} ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.675 [INFO][4552] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.789 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" HandleID="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.820 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" HandleID="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050760), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-e1jz5.gb1.brightbox.com", "pod":"calico-kube-controllers-7599984bdf-btpzm", "timestamp":"2025-01-15 13:52:02.789652437 +0000 UTC"}, Hostname:"srv-e1jz5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.820 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.821 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.821 [INFO][4604] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-e1jz5.gb1.brightbox.com' Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.831 [INFO][4604] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.850 [INFO][4604] ipam/ipam.go 372: Looking up existing affinities for host host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.860 [INFO][4604] ipam/ipam.go 489: Trying affinity for 192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.868 [INFO][4604] ipam/ipam.go 155: Attempting to load block cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.878 [INFO][4604] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.878 [INFO][4604] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.883 [INFO][4604] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.896 [INFO][4604] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.910 [INFO][4604] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.23.198/26] block=192.168.23.192/26 handle="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.910 [INFO][4604] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.198/26] handle="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" host="srv-e1jz5.gb1.brightbox.com" Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.910 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
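The IPAM steps above are the whole allocation path in miniature: under the host-wide lock, the plugin confirms this node's affinity to the 192.168.23.192/26 block, loads the block, claims the next free ordinal, and writes the block back, landing on 192.168.23.198. Here is a sketch of that ordinal arithmetic, assuming a simple boolean allocation map rather than Calico's real block data model:

package main

import (
    "fmt"
    "net/netip"
)

// claimNext returns the first unallocated address in the block, mirroring
// "Attempting to assign 1 addresses from block block=192.168.23.192/26".
func claimNext(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if !allocated[a] {
            allocated[a] = true // claim the ordinal; Calico persists this via "Writing block"
            return a, true
        }
    }
    return netip.Addr{}, false // block exhausted; IPAM would fall back to another block
}

func main() {
    block := netip.MustParsePrefix("192.168.23.192/26") // 64 addresses, ordinals 0-63
    allocated := map[netip.Addr]bool{}
    // Suppose ordinals 0-5 (.192-.197) were claimed by earlier pods on this node.
    for a, i := block.Addr(), 0; i < 6; a, i = a.Next(), i+1 {
        allocated[a] = true
    }
    ip, _ := claimNext(block, allocated)
    fmt.Println(ip) // 192.168.23.198, matching the address assigned above
}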
Jan 15 13:52:02.977152 containerd[1506]: 2025-01-15 13:52:02.910 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.198/26] IPv6=[] ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" HandleID="k8s-pod-network.cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.912 [INFO][4552] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0", GenerateName:"calico-kube-controllers-7599984bdf-", Namespace:"calico-system", SelfLink:"", UID:"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7599984bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7599984bdf-btpzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2440da13a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.913 [INFO][4552] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.23.198/32] ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.913 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a2440da13a ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.930 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 
13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.932 [INFO][4552] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0", GenerateName:"calico-kube-controllers-7599984bdf-", Namespace:"calico-system", SelfLink:"", UID:"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7599984bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae", Pod:"calico-kube-controllers-7599984bdf-btpzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2440da13a", MAC:"96:ab:4a:63:da:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:02.979877 containerd[1506]: 2025-01-15 13:52:02.968 [INFO][4552] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae" Namespace="calico-system" Pod="calico-kube-controllers-7599984bdf-btpzm" WorkloadEndpoint="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:02.999994 kubelet[2737]: I0115 13:52:02.999829 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kgv74" podStartSLOduration=38.99910594 podStartE2EDuration="38.99910594s" podCreationTimestamp="2025-01-15 13:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:52:02.998272931 +0000 UTC m=+51.805721301" watchObservedRunningTime="2025-01-15 13:52:02.99910594 +0000 UTC m=+51.806554314" Jan 15 13:52:03.002812 systemd-networkd[1432]: caliaa9cf520e22: Gained IPv6LL Jan 15 13:52:03.010341 containerd[1506]: time="2025-01-15T13:52:03.009590461Z" level=info msg="StartContainer for \"e6896d846c7294e3b7b79c7e7d12a463b5d9d8fe0d30887801d1c3144b52e9b4\" returns successfully" Jan 15 13:52:03.041132 containerd[1506]: time="2025-01-15T13:52:03.040829646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:52:03.041466 containerd[1506]: time="2025-01-15T13:52:03.041411064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:52:03.041694 containerd[1506]: time="2025-01-15T13:52:03.041641648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:03.042139 containerd[1506]: time="2025-01-15T13:52:03.042062507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:52:03.089880 systemd[1]: Started cri-containerd-cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae.scope - libcontainer container cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae. Jan 15 13:52:03.316056 containerd[1506]: time="2025-01-15T13:52:03.315880921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7599984bdf-btpzm,Uid:e98d2fb8-4dd9-4f33-9606-af6d41a36b1e,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae\"" Jan 15 13:52:03.512737 systemd-networkd[1432]: cali829a485b001: Gained IPv6LL Jan 15 13:52:03.832544 systemd-networkd[1432]: calia76b48332ae: Gained IPv6LL Jan 15 13:52:04.003727 kubelet[2737]: I0115 13:52:04.003676 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6kvqp" podStartSLOduration=40.003624784 podStartE2EDuration="40.003624784s" podCreationTimestamp="2025-01-15 13:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:52:03.997038132 +0000 UTC m=+52.804486517" watchObservedRunningTime="2025-01-15 13:52:04.003624784 +0000 UTC m=+52.811073148" Jan 15 13:52:04.280663 systemd-networkd[1432]: cali6a2440da13a: Gained IPv6LL Jan 15 13:52:05.198923 containerd[1506]: time="2025-01-15T13:52:05.198740180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:05.200244 containerd[1506]: time="2025-01-15T13:52:05.199745386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 15 13:52:05.201191 containerd[1506]: time="2025-01-15T13:52:05.201118202Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:05.205450 containerd[1506]: time="2025-01-15T13:52:05.205238170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:05.207768 containerd[1506]: time="2025-01-15T13:52:05.207587711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.041309032s" Jan 15 13:52:05.207768 containerd[1506]: time="2025-01-15T13:52:05.207636421Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 13:52:05.208945 containerd[1506]: time="2025-01-15T13:52:05.208887435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 13:52:05.212334 containerd[1506]: time="2025-01-15T13:52:05.212266653Z" level=info msg="CreateContainer within sandbox \"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 13:52:05.228650 containerd[1506]: time="2025-01-15T13:52:05.228593119Z" level=info msg="CreateContainer within sandbox \"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b55debe2fb0304363d02ee2ef86d8c15e70a65dd3eb90a6cfb0e093267b3b08\"" Jan 15 13:52:05.231484 containerd[1506]: time="2025-01-15T13:52:05.230777301Z" level=info msg="StartContainer for \"3b55debe2fb0304363d02ee2ef86d8c15e70a65dd3eb90a6cfb0e093267b3b08\"" Jan 15 13:52:05.277515 systemd[1]: Started cri-containerd-3b55debe2fb0304363d02ee2ef86d8c15e70a65dd3eb90a6cfb0e093267b3b08.scope - libcontainer container 3b55debe2fb0304363d02ee2ef86d8c15e70a65dd3eb90a6cfb0e093267b3b08. Jan 15 13:52:05.342321 containerd[1506]: time="2025-01-15T13:52:05.342161780Z" level=info msg="StartContainer for \"3b55debe2fb0304363d02ee2ef86d8c15e70a65dd3eb90a6cfb0e093267b3b08\" returns successfully" Jan 15 13:52:06.990585 kubelet[2737]: I0115 13:52:06.990513 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:52:07.050507 containerd[1506]: time="2025-01-15T13:52:07.050293691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:07.051691 containerd[1506]: time="2025-01-15T13:52:07.051636065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 15 13:52:07.052835 containerd[1506]: time="2025-01-15T13:52:07.052552621Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:07.055932 containerd[1506]: time="2025-01-15T13:52:07.055843749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:07.057341 containerd[1506]: time="2025-01-15T13:52:07.057084815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.848136096s" Jan 15 13:52:07.057341 containerd[1506]: time="2025-01-15T13:52:07.057139816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 15 13:52:07.059227 containerd[1506]: time="2025-01-15T13:52:07.058639412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 13:52:07.062398 containerd[1506]: time="2025-01-15T13:52:07.061955843Z" level=info msg="CreateContainer within 
sandbox \"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 13:52:07.085211 containerd[1506]: time="2025-01-15T13:52:07.085080673Z" level=info msg="CreateContainer within sandbox \"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1\"" Jan 15 13:52:07.086133 containerd[1506]: time="2025-01-15T13:52:07.086090710Z" level=info msg="StartContainer for \"8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1\"" Jan 15 13:52:07.146775 systemd[1]: run-containerd-runc-k8s.io-8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1-runc.9REvGF.mount: Deactivated successfully. Jan 15 13:52:07.159576 systemd[1]: Started cri-containerd-8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1.scope - libcontainer container 8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1. Jan 15 13:52:07.214805 containerd[1506]: time="2025-01-15T13:52:07.214671182Z" level=info msg="StartContainer for \"8c0ad4877d2237881ec5f6d63a244e32ade4f02ac6f0c3ac8007c4f73414a8b1\" returns successfully" Jan 15 13:52:07.612178 containerd[1506]: time="2025-01-15T13:52:07.612095572Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:07.614868 containerd[1506]: time="2025-01-15T13:52:07.614107012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 13:52:07.625340 containerd[1506]: time="2025-01-15T13:52:07.622670467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 563.984792ms" Jan 15 13:52:07.625340 containerd[1506]: time="2025-01-15T13:52:07.622749605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 13:52:07.627630 containerd[1506]: time="2025-01-15T13:52:07.627597937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 13:52:07.636214 containerd[1506]: time="2025-01-15T13:52:07.636147528Z" level=info msg="CreateContainer within sandbox \"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 13:52:07.729421 containerd[1506]: time="2025-01-15T13:52:07.729242859Z" level=info msg="CreateContainer within sandbox \"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f9b5edbdae3a31b9226c977ffbf02eedfa93c4e1cc6d15b71324c41ac8824243\"" Jan 15 13:52:07.733687 containerd[1506]: time="2025-01-15T13:52:07.733637400Z" level=info msg="StartContainer for \"f9b5edbdae3a31b9226c977ffbf02eedfa93c4e1cc6d15b71324c41ac8824243\"" Jan 15 13:52:07.813557 systemd[1]: Started cri-containerd-f9b5edbdae3a31b9226c977ffbf02eedfa93c4e1cc6d15b71324c41ac8824243.scope - libcontainer container f9b5edbdae3a31b9226c977ffbf02eedfa93c4e1cc6d15b71324c41ac8824243. 
Jan 15 13:52:07.903734 containerd[1506]: time="2025-01-15T13:52:07.903601403Z" level=info msg="StartContainer for \"f9b5edbdae3a31b9226c977ffbf02eedfa93c4e1cc6d15b71324c41ac8824243\" returns successfully" Jan 15 13:52:08.038170 kubelet[2737]: I0115 13:52:08.038100 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84b56f5647-rqr5k" podStartSLOduration=32.045432054 podStartE2EDuration="37.038023288s" podCreationTimestamp="2025-01-15 13:51:31 +0000 UTC" firstStartedPulling="2025-01-15 13:52:02.633717081 +0000 UTC m=+51.441165438" lastFinishedPulling="2025-01-15 13:52:07.626308296 +0000 UTC m=+56.433756672" observedRunningTime="2025-01-15 13:52:08.036351478 +0000 UTC m=+56.843799841" watchObservedRunningTime="2025-01-15 13:52:08.038023288 +0000 UTC m=+56.845471663" Jan 15 13:52:08.039655 kubelet[2737]: I0115 13:52:08.039098 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84b56f5647-2r2cz" podStartSLOduration=32.990910783 podStartE2EDuration="37.039043901s" podCreationTimestamp="2025-01-15 13:51:31 +0000 UTC" firstStartedPulling="2025-01-15 13:52:01.160203797 +0000 UTC m=+49.967652155" lastFinishedPulling="2025-01-15 13:52:05.208336897 +0000 UTC m=+54.015785273" observedRunningTime="2025-01-15 13:52:06.015632761 +0000 UTC m=+54.823081130" watchObservedRunningTime="2025-01-15 13:52:08.039043901 +0000 UTC m=+56.846492280" Jan 15 13:52:09.011580 kubelet[2737]: I0115 13:52:09.011472 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:52:11.642925 containerd[1506]: time="2025-01-15T13:52:11.640715819Z" level=info msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" Jan 15 13:52:11.722251 containerd[1506]: time="2025-01-15T13:52:11.721860075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:11.724068 containerd[1506]: time="2025-01-15T13:52:11.723211096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 15 13:52:11.727258 containerd[1506]: time="2025-01-15T13:52:11.725923235Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:11.744037 containerd[1506]: time="2025-01-15T13:52:11.741798240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.114054344s" Jan 15 13:52:11.744735 containerd[1506]: time="2025-01-15T13:52:11.744678137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 15 13:52:11.744897 containerd[1506]: time="2025-01-15T13:52:11.742179337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:52:11.746212 containerd[1506]: 
time="2025-01-15T13:52:11.746168659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 13:52:11.814588 containerd[1506]: time="2025-01-15T13:52:11.814090974Z" level=info msg="CreateContainer within sandbox \"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 13:52:11.847976 containerd[1506]: time="2025-01-15T13:52:11.847814303Z" level=info msg="CreateContainer within sandbox \"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3\"" Jan 15 13:52:11.854665 containerd[1506]: time="2025-01-15T13:52:11.852734165Z" level=info msg="StartContainer for \"678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3\"" Jan 15 13:52:11.976544 systemd[1]: Started cri-containerd-678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3.scope - libcontainer container 678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3. Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:11.887 [WARNING][4873] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648", Pod:"csi-node-driver-n4spm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c81b2cc52f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:11.891 [INFO][4873] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:11.891 [INFO][4873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" iface="eth0" netns="" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:11.891 [INFO][4873] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:11.891 [INFO][4873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.004 [INFO][4892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.005 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.005 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.031 [WARNING][4892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.033 [INFO][4892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.040 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:12.063035 containerd[1506]: 2025-01-15 13:52:12.056 [INFO][4873] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.063035 containerd[1506]: time="2025-01-15T13:52:12.061565164Z" level=info msg="TearDown network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" successfully" Jan 15 13:52:12.063035 containerd[1506]: time="2025-01-15T13:52:12.061604056Z" level=info msg="StopPodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" returns successfully" Jan 15 13:52:12.099887 containerd[1506]: time="2025-01-15T13:52:12.099773461Z" level=info msg="RemovePodSandbox for \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" Jan 15 13:52:12.099887 containerd[1506]: time="2025-01-15T13:52:12.099885441Z" level=info msg="Forcibly stopping sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\"" Jan 15 13:52:12.161119 containerd[1506]: time="2025-01-15T13:52:12.160972530Z" level=info msg="StartContainer for \"678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3\" returns successfully" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.286 [WARNING][4929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4ec8923-4d40-4539-b7a9-8d3c151dc6d9", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648", Pod:"csi-node-driver-n4spm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c81b2cc52f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.286 [INFO][4929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.286 [INFO][4929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" iface="eth0" netns="" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.286 [INFO][4929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.286 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.358 [INFO][4951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.359 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.359 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.368 [WARNING][4951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.368 [INFO][4951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" HandleID="k8s-pod-network.64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Workload="srv--e1jz5.gb1.brightbox.com-k8s-csi--node--driver--n4spm-eth0" Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.373 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:12.381653 containerd[1506]: 2025-01-15 13:52:12.376 [INFO][4929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43" Jan 15 13:52:12.381653 containerd[1506]: time="2025-01-15T13:52:12.381505548Z" level=info msg="TearDown network for sandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" successfully" Jan 15 13:52:12.398185 containerd[1506]: time="2025-01-15T13:52:12.396420498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:52:12.438461 containerd[1506]: time="2025-01-15T13:52:12.438372163Z" level=info msg="RemovePodSandbox \"64c87c456dc2a819cded1b260f385088a1824fda4ca6c09150e19386f065eb43\" returns successfully" Jan 15 13:52:12.440164 containerd[1506]: time="2025-01-15T13:52:12.439988614Z" level=info msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.545 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"630e22a6-6c6e-45a9-91a0-71d433560181", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3", Pod:"calico-apiserver-84b56f5647-rqr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9cf520e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.546 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.546 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" iface="eth0" netns="" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.546 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.546 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.615 [INFO][4975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.616 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.616 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.625 [WARNING][4975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.626 [INFO][4975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.629 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:12.635091 containerd[1506]: 2025-01-15 13:52:12.631 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.635091 containerd[1506]: time="2025-01-15T13:52:12.634271638Z" level=info msg="TearDown network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" successfully" Jan 15 13:52:12.635091 containerd[1506]: time="2025-01-15T13:52:12.634380153Z" level=info msg="StopPodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" returns successfully" Jan 15 13:52:12.639608 containerd[1506]: time="2025-01-15T13:52:12.635099024Z" level=info msg="RemovePodSandbox for \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" Jan 15 13:52:12.639608 containerd[1506]: time="2025-01-15T13:52:12.635136057Z" level=info msg="Forcibly stopping sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\"" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.743 [WARNING][4993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"630e22a6-6c6e-45a9-91a0-71d433560181", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a9de9eaeb82d12ee62ede9be96ad4193decfd600a6ae896f8dc4a48a72b7f9d3", Pod:"calico-apiserver-84b56f5647-rqr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9cf520e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.743 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.744 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" iface="eth0" netns="" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.744 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.744 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.845 [INFO][4999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.845 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.845 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.861 [WARNING][4999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.861 [INFO][4999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" HandleID="k8s-pod-network.e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--rqr5k-eth0" Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.863 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:12.867853 containerd[1506]: 2025-01-15 13:52:12.865 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af" Jan 15 13:52:12.867853 containerd[1506]: time="2025-01-15T13:52:12.867811204Z" level=info msg="TearDown network for sandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" successfully" Jan 15 13:52:12.871346 containerd[1506]: time="2025-01-15T13:52:12.871260538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:52:12.875680 containerd[1506]: time="2025-01-15T13:52:12.875632125Z" level=info msg="RemovePodSandbox \"e0a4619dd93df428aeb23debac4d95a3a52524edcfa858f0334a33c6094d18af\" returns successfully" Jan 15 13:52:12.876369 containerd[1506]: time="2025-01-15T13:52:12.876314544Z" level=info msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.940 [WARNING][5017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0", GenerateName:"calico-kube-controllers-7599984bdf-", Namespace:"calico-system", SelfLink:"", UID:"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7599984bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae", Pod:"calico-kube-controllers-7599984bdf-btpzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2440da13a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.941 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.941 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" iface="eth0" netns="" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.941 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.941 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.989 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.990 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.990 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.998 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:12.999 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:13.002 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:13.010002 containerd[1506]: 2025-01-15 13:52:13.005 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.010002 containerd[1506]: time="2025-01-15T13:52:13.008813330Z" level=info msg="TearDown network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" successfully" Jan 15 13:52:13.010002 containerd[1506]: time="2025-01-15T13:52:13.008859134Z" level=info msg="StopPodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" returns successfully" Jan 15 13:52:13.014540 containerd[1506]: time="2025-01-15T13:52:13.010491889Z" level=info msg="RemovePodSandbox for \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" Jan 15 13:52:13.014540 containerd[1506]: time="2025-01-15T13:52:13.010537572Z" level=info msg="Forcibly stopping sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\"" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.116 [WARNING][5041] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0", GenerateName:"calico-kube-controllers-7599984bdf-", Namespace:"calico-system", SelfLink:"", UID:"e98d2fb8-4dd9-4f33-9606-af6d41a36b1e", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7599984bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"cfb5fb77714fe0601e0339f880b2049e6969c4ef2c2b1b759d2b3bb9374f27ae", Pod:"calico-kube-controllers-7599984bdf-btpzm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2440da13a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.116 [INFO][5041] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.116 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" iface="eth0" netns="" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.116 [INFO][5041] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.116 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.213 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.215 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.217 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.229 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.229 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" HandleID="k8s-pod-network.5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--kube--controllers--7599984bdf--btpzm-eth0" Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.232 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:52:13.238013 containerd[1506]: 2025-01-15 13:52:13.235 [INFO][5041] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141" Jan 15 13:52:13.240798 containerd[1506]: time="2025-01-15T13:52:13.238064765Z" level=info msg="TearDown network for sandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" successfully" Jan 15 13:52:13.242671 containerd[1506]: time="2025-01-15T13:52:13.242615230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:52:13.242762 containerd[1506]: time="2025-01-15T13:52:13.242697137Z" level=info msg="RemovePodSandbox \"5d20ca11d9ec6718d0cdba52d53aa0300f25565be8b7462dff4a9b16bf78c141\" returns successfully" Jan 15 13:52:13.244006 containerd[1506]: time="2025-01-15T13:52:13.243954325Z" level=info msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\"" Jan 15 13:52:13.259083 kubelet[2737]: I0115 13:52:13.259035 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7599984bdf-btpzm" podStartSLOduration=32.833337984 podStartE2EDuration="41.25868085s" podCreationTimestamp="2025-01-15 13:51:32 +0000 UTC" firstStartedPulling="2025-01-15 13:52:03.320254142 +0000 UTC m=+52.127702503" lastFinishedPulling="2025-01-15 13:52:11.745596997 +0000 UTC m=+60.553045369" observedRunningTime="2025-01-15 13:52:13.099958008 +0000 UTC m=+61.907406377" watchObservedRunningTime="2025-01-15 13:52:13.25868085 +0000 UTC m=+62.066129220" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.335 [WARNING][5084] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82e5b51c-288b-4a98-8331-b0a7cd6134e0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7", Pod:"coredns-76f75df574-kgv74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali829a485b001", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.337 [INFO][5084] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.337 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" iface="eth0" netns="" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.337 [INFO][5084] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.338 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.417 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0" Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.417 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.417 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.435 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0"
Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.435 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0"
Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.443 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:13.449118 containerd[1506]: 2025-01-15 13:52:13.446 [INFO][5084] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"
Jan 15 13:52:13.451029 containerd[1506]: time="2025-01-15T13:52:13.449130969Z" level=info msg="TearDown network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" successfully"
Jan 15 13:52:13.451029 containerd[1506]: time="2025-01-15T13:52:13.449162029Z" level=info msg="StopPodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" returns successfully"
Jan 15 13:52:13.451029 containerd[1506]: time="2025-01-15T13:52:13.449799915Z" level=info msg="RemovePodSandbox for \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\""
Jan 15 13:52:13.451029 containerd[1506]: time="2025-01-15T13:52:13.449847289Z" level=info msg="Forcibly stopping sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\""
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.533 [WARNING][5109] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82e5b51c-288b-4a98-8331-b0a7cd6134e0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"a8a9da39d6f59fe7f5b3079e0b5cf96c138f11d6f8fea416eb4d448dc14405b7", Pod:"coredns-76f75df574-kgv74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali829a485b001", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.533 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.533 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" iface="eth0" netns=""
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.533 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.533 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.577 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.577 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.577 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.591 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.592 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" HandleID="k8s-pod-network.cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--kgv74-eth0"
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.594 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:13.600360 containerd[1506]: 2025-01-15 13:52:13.596 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72"
Jan 15 13:52:13.600360 containerd[1506]: time="2025-01-15T13:52:13.599487977Z" level=info msg="TearDown network for sandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" successfully"
Jan 15 13:52:13.604346 containerd[1506]: time="2025-01-15T13:52:13.603805422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 15 13:52:13.604346 containerd[1506]: time="2025-01-15T13:52:13.603951818Z" level=info msg="RemovePodSandbox \"cf2e7bd6e7b1e984246af2265dd3d779d01e01f8c7831a025e865b6dbb39ad72\" returns successfully"
Jan 15 13:52:13.618287 containerd[1506]: time="2025-01-15T13:52:13.618240754Z" level=info msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\""
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.697 [WARNING][5134] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d776d0fd-32e5-42e4-a525-94155ed739e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9", Pod:"calico-apiserver-84b56f5647-2r2cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic33739a91ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.697 [INFO][5134] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.697 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" iface="eth0" netns=""
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.698 [INFO][5134] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.698 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.740 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.740 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.740 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.762 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.762 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.766 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:13.788854 containerd[1506]: 2025-01-15 13:52:13.784 [INFO][5134] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.788854 containerd[1506]: time="2025-01-15T13:52:13.788148567Z" level=info msg="TearDown network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" successfully"
Jan 15 13:52:13.788854 containerd[1506]: time="2025-01-15T13:52:13.788195095Z" level=info msg="StopPodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" returns successfully"
Jan 15 13:52:13.820952 containerd[1506]: time="2025-01-15T13:52:13.789211779Z" level=info msg="RemovePodSandbox for \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\""
Jan 15 13:52:13.820952 containerd[1506]: time="2025-01-15T13:52:13.789251322Z" level=info msg="Forcibly stopping sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\""
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.904 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0", GenerateName:"calico-apiserver-84b56f5647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d776d0fd-32e5-42e4-a525-94155ed739e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b56f5647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"961e3e33a9865f6ab81df0975f6ae44194732b7faf3d8240b3c7c00df08e5fe9", Pod:"calico-apiserver-84b56f5647-2r2cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic33739a91ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.904 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.904 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" iface="eth0" netns=""
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.904 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.904 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.971 [INFO][5168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.972 [INFO][5168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.972 [INFO][5168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.980 [WARNING][5168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.980 [INFO][5168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" HandleID="k8s-pod-network.b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7" Workload="srv--e1jz5.gb1.brightbox.com-k8s-calico--apiserver--84b56f5647--2r2cz-eth0"
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.982 [INFO][5168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:13.987338 containerd[1506]: 2025-01-15 13:52:13.984 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7"
Jan 15 13:52:13.987338 containerd[1506]: time="2025-01-15T13:52:13.987156022Z" level=info msg="TearDown network for sandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" successfully"
Jan 15 13:52:13.995639 containerd[1506]: time="2025-01-15T13:52:13.995374343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 15 13:52:13.995639 containerd[1506]: time="2025-01-15T13:52:13.995457785Z" level=info msg="RemovePodSandbox \"b18498034d385464115c0ca1d02564d65c4ce98b71574b79d80779db1d4945e7\" returns successfully"
Jan 15 13:52:13.997581 containerd[1506]: time="2025-01-15T13:52:13.997041878Z" level=info msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\""
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.102 [WARNING][5185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"016d5cbd-c8e8-4eb2-acb8-556581579fdc", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc", Pod:"coredns-76f75df574-6kvqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76b48332ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.103 [INFO][5185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.103 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" iface="eth0" netns=""
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.103 [INFO][5185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.103 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.161 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.162 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.162 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.175 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.175 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.179 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:14.185193 containerd[1506]: 2025-01-15 13:52:14.182 [INFO][5185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.188649 containerd[1506]: time="2025-01-15T13:52:14.185240240Z" level=info msg="TearDown network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" successfully"
Jan 15 13:52:14.188649 containerd[1506]: time="2025-01-15T13:52:14.185290709Z" level=info msg="StopPodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" returns successfully"
Jan 15 13:52:14.188649 containerd[1506]: time="2025-01-15T13:52:14.187530693Z" level=info msg="RemovePodSandbox for \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\""
Jan 15 13:52:14.188649 containerd[1506]: time="2025-01-15T13:52:14.187569760Z" level=info msg="Forcibly stopping sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\""
Jan 15 13:52:14.360638 containerd[1506]: time="2025-01-15T13:52:14.360582854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 13:52:14.364989 containerd[1506]: time="2025-01-15T13:52:14.364337176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 15 13:52:14.368507 containerd[1506]: time="2025-01-15T13:52:14.368465993Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 13:52:14.377636 containerd[1506]: time="2025-01-15T13:52:14.377369083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 13:52:14.383006 containerd[1506]: time="2025-01-15T13:52:14.382968884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.63671258s"
Jan 15 13:52:14.383358 containerd[1506]: time="2025-01-15T13:52:14.383325783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 15 13:52:14.388719 containerd[1506]: time="2025-01-15T13:52:14.388673575Z" level=info msg="CreateContainer within sandbox \"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.321 [WARNING][5211] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"016d5cbd-c8e8-4eb2-acb8-556581579fdc", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 51, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-e1jz5.gb1.brightbox.com", ContainerID:"9d6a8046ba17eef8079f17133ec2d86c8543919e5d68ab984653ecf6b70da7cc", Pod:"coredns-76f75df574-6kvqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76b48332ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.321 [INFO][5211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.321 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" iface="eth0" netns=""
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.321 [INFO][5211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.321 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.378 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.379 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.379 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.394 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.394 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" HandleID="k8s-pod-network.64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a" Workload="srv--e1jz5.gb1.brightbox.com-k8s-coredns--76f75df574--6kvqp-eth0"
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.400 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 15 13:52:14.406348 containerd[1506]: 2025-01-15 13:52:14.401 [INFO][5211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a"
Jan 15 13:52:14.406348 containerd[1506]: time="2025-01-15T13:52:14.406371475Z" level=info msg="TearDown network for sandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" successfully"
Jan 15 13:52:14.412716 containerd[1506]: time="2025-01-15T13:52:14.412481669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 15 13:52:14.412716 containerd[1506]: time="2025-01-15T13:52:14.412563726Z" level=info msg="RemovePodSandbox \"64b51a633cdda4a6bddae0123c3b0d8c4c11939a56b78171751519a76abd437a\" returns successfully"
Jan 15 13:52:14.421846 containerd[1506]: time="2025-01-15T13:52:14.421781598Z" level=info msg="CreateContainer within sandbox \"92996ac013b6ecf63419694dc78aa68f8600e9ef4849c617a71f875b8846e648\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b15ed91c0d1ca1c48287dffe52c652d9613e27ff91eb87bc1f85fe93e21607ab\""
Jan 15 13:52:14.423338 containerd[1506]: time="2025-01-15T13:52:14.423046190Z" level=info msg="StartContainer for \"b15ed91c0d1ca1c48287dffe52c652d9613e27ff91eb87bc1f85fe93e21607ab\""
Jan 15 13:52:14.499542 systemd[1]: Started cri-containerd-b15ed91c0d1ca1c48287dffe52c652d9613e27ff91eb87bc1f85fe93e21607ab.scope - libcontainer container b15ed91c0d1ca1c48287dffe52c652d9613e27ff91eb87bc1f85fe93e21607ab.
Jan 15 13:52:14.551400 containerd[1506]: time="2025-01-15T13:52:14.550780506Z" level=info msg="StartContainer for \"b15ed91c0d1ca1c48287dffe52c652d9613e27ff91eb87bc1f85fe93e21607ab\" returns successfully"
Jan 15 13:52:15.148445 kubelet[2737]: I0115 13:52:15.148317 2737 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 15 13:52:15.158603 kubelet[2737]: I0115 13:52:15.158275 2737 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 15 13:52:15.417785 kubelet[2737]: I0115 13:52:15.416700 2737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-n4spm" podStartSLOduration=31.295313762 podStartE2EDuration="44.403498987s" podCreationTimestamp="2025-01-15 13:51:31 +0000 UTC" firstStartedPulling="2025-01-15 13:52:01.275805618 +0000 UTC m=+50.083253981" lastFinishedPulling="2025-01-15 13:52:14.383990839 +0000 UTC m=+63.191439206" observedRunningTime="2025-01-15 13:52:15.399394261 +0000 UTC m=+64.206842654" watchObservedRunningTime="2025-01-15 13:52:15.403498987 +0000 UTC m=+64.210947357"
Jan 15 13:52:26.553822 systemd[1]: Started sshd@10-10.230.9.202:22-147.75.109.163:52150.service - OpenSSH per-connection server daemon (147.75.109.163:52150).
Jan 15 13:52:27.496370 sshd[5296]: Accepted publickey for core from 147.75.109.163 port 52150 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:27.498787 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:27.510394 systemd-logind[1487]: New session 12 of user core.
Jan 15 13:52:27.518051 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 15 13:52:27.729670 kubelet[2737]: I0115 13:52:27.729622 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 15 13:52:28.741370 sshd[5296]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:28.748081 systemd[1]: sshd@10-10.230.9.202:22-147.75.109.163:52150.service: Deactivated successfully.
Jan 15 13:52:28.750992 systemd[1]: session-12.scope: Deactivated successfully.
Jan 15 13:52:28.751989 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Jan 15 13:52:28.753890 systemd-logind[1487]: Removed session 12.
Jan 15 13:52:30.182981 kubelet[2737]: I0115 13:52:30.182657 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 15 13:52:33.906707 systemd[1]: Started sshd@11-10.230.9.202:22-147.75.109.163:49200.service - OpenSSH per-connection server daemon (147.75.109.163:49200).
Jan 15 13:52:34.842458 sshd[5358]: Accepted publickey for core from 147.75.109.163 port 49200 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:34.845111 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:34.852823 systemd-logind[1487]: New session 13 of user core.
Jan 15 13:52:34.858573 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 15 13:52:35.679007 sshd[5358]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:35.684457 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Jan 15 13:52:35.688111 systemd[1]: sshd@11-10.230.9.202:22-147.75.109.163:49200.service: Deactivated successfully.
Jan 15 13:52:35.691744 systemd[1]: session-13.scope: Deactivated successfully.
Jan 15 13:52:35.694241 systemd-logind[1487]: Removed session 13.
Jan 15 13:52:40.835778 systemd[1]: Started sshd@12-10.230.9.202:22-147.75.109.163:40860.service - OpenSSH per-connection server daemon (147.75.109.163:40860).
Jan 15 13:52:41.782038 sshd[5372]: Accepted publickey for core from 147.75.109.163 port 40860 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:41.786644 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:41.797535 systemd-logind[1487]: New session 14 of user core.
Jan 15 13:52:41.803621 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 15 13:52:42.575206 sshd[5372]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:42.587252 systemd[1]: sshd@12-10.230.9.202:22-147.75.109.163:40860.service: Deactivated successfully.
Jan 15 13:52:42.593363 systemd[1]: session-14.scope: Deactivated successfully.
Jan 15 13:52:42.596613 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Jan 15 13:52:42.600268 systemd-logind[1487]: Removed session 14.
Jan 15 13:52:42.724723 systemd[1]: Started sshd@13-10.230.9.202:22-147.75.109.163:40870.service - OpenSSH per-connection server daemon (147.75.109.163:40870).
Jan 15 13:52:43.640283 sshd[5392]: Accepted publickey for core from 147.75.109.163 port 40870 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:43.642599 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:43.650973 systemd-logind[1487]: New session 15 of user core.
Jan 15 13:52:43.655553 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 15 13:52:44.491322 sshd[5392]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:44.498281 systemd[1]: sshd@13-10.230.9.202:22-147.75.109.163:40870.service: Deactivated successfully.
Jan 15 13:52:44.503139 systemd[1]: session-15.scope: Deactivated successfully.
Jan 15 13:52:44.505264 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Jan 15 13:52:44.507068 systemd-logind[1487]: Removed session 15.
Jan 15 13:52:44.655656 systemd[1]: Started sshd@14-10.230.9.202:22-147.75.109.163:40878.service - OpenSSH per-connection server daemon (147.75.109.163:40878).
Jan 15 13:52:45.561366 sshd[5405]: Accepted publickey for core from 147.75.109.163 port 40878 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:45.563491 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:45.570339 systemd-logind[1487]: New session 16 of user core.
Jan 15 13:52:45.578534 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 15 13:52:46.296555 sshd[5405]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:46.305765 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Jan 15 13:52:46.306638 systemd[1]: sshd@14-10.230.9.202:22-147.75.109.163:40878.service: Deactivated successfully.
Jan 15 13:52:46.309917 systemd[1]: session-16.scope: Deactivated successfully.
Jan 15 13:52:46.312744 systemd-logind[1487]: Removed session 16.
Jan 15 13:52:51.458809 systemd[1]: Started sshd@15-10.230.9.202:22-147.75.109.163:56492.service - OpenSSH per-connection server daemon (147.75.109.163:56492).
Jan 15 13:52:52.386107 sshd[5440]: Accepted publickey for core from 147.75.109.163 port 56492 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:52.388255 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:52.394492 systemd-logind[1487]: New session 17 of user core.
Jan 15 13:52:52.400549 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 15 13:52:53.151094 sshd[5440]: pam_unix(sshd:session): session closed for user core
Jan 15 13:52:53.158735 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Jan 15 13:52:53.161597 systemd[1]: sshd@15-10.230.9.202:22-147.75.109.163:56492.service: Deactivated successfully.
Jan 15 13:52:53.165198 systemd[1]: session-17.scope: Deactivated successfully.
Jan 15 13:52:53.166947 systemd-logind[1487]: Removed session 17.
Jan 15 13:52:58.309737 systemd[1]: Started sshd@16-10.230.9.202:22-147.75.109.163:50058.service - OpenSSH per-connection server daemon (147.75.109.163:50058).
Jan 15 13:52:59.210158 sshd[5458]: Accepted publickey for core from 147.75.109.163 port 50058 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:52:59.211057 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:52:59.219072 systemd-logind[1487]: New session 18 of user core.
Jan 15 13:52:59.228542 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 15 13:53:00.004677 sshd[5458]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:00.015215 systemd[1]: sshd@16-10.230.9.202:22-147.75.109.163:50058.service: Deactivated successfully.
Jan 15 13:53:00.018295 systemd[1]: session-18.scope: Deactivated successfully.
Jan 15 13:53:00.019438 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Jan 15 13:53:00.021294 systemd-logind[1487]: Removed session 18.
Jan 15 13:53:05.164727 systemd[1]: Started sshd@17-10.230.9.202:22-147.75.109.163:50074.service - OpenSSH per-connection server daemon (147.75.109.163:50074).
Jan 15 13:53:06.101486 sshd[5493]: Accepted publickey for core from 147.75.109.163 port 50074 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:06.103462 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:06.110992 systemd-logind[1487]: New session 19 of user core.
Jan 15 13:53:06.121551 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 15 13:53:06.862775 sshd[5493]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:06.867208 systemd[1]: sshd@17-10.230.9.202:22-147.75.109.163:50074.service: Deactivated successfully.
Jan 15 13:53:06.870283 systemd[1]: session-19.scope: Deactivated successfully.
Jan 15 13:53:06.872234 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Jan 15 13:53:06.874864 systemd-logind[1487]: Removed session 19.
Jan 15 13:53:07.023726 systemd[1]: Started sshd@18-10.230.9.202:22-147.75.109.163:50078.service - OpenSSH per-connection server daemon (147.75.109.163:50078).
Jan 15 13:53:07.914385 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 50078 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:07.916866 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:07.924801 systemd-logind[1487]: New session 20 of user core.
Jan 15 13:53:07.931555 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 15 13:53:08.878117 sshd[5506]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:08.888288 systemd[1]: sshd@18-10.230.9.202:22-147.75.109.163:50078.service: Deactivated successfully.
Jan 15 13:53:08.890906 systemd[1]: session-20.scope: Deactivated successfully.
Jan 15 13:53:08.893057 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Jan 15 13:53:08.894545 systemd-logind[1487]: Removed session 20.
Jan 15 13:53:09.027650 systemd[1]: Started sshd@19-10.230.9.202:22-147.75.109.163:42344.service - OpenSSH per-connection server daemon (147.75.109.163:42344).
Jan 15 13:53:09.952980 sshd[5529]: Accepted publickey for core from 147.75.109.163 port 42344 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:09.956614 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:09.965088 systemd-logind[1487]: New session 21 of user core.
Jan 15 13:53:09.971523 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 15 13:53:13.449609 sshd[5529]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:13.461414 systemd[1]: sshd@19-10.230.9.202:22-147.75.109.163:42344.service: Deactivated successfully.
Jan 15 13:53:13.464894 systemd[1]: session-21.scope: Deactivated successfully.
Jan 15 13:53:13.467059 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Jan 15 13:53:13.469684 systemd-logind[1487]: Removed session 21.
Jan 15 13:53:13.609903 systemd[1]: Started sshd@20-10.230.9.202:22-147.75.109.163:42352.service - OpenSSH per-connection server daemon (147.75.109.163:42352).
Jan 15 13:53:14.526921 sshd[5549]: Accepted publickey for core from 147.75.109.163 port 42352 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:14.529528 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:14.536261 systemd-logind[1487]: New session 22 of user core.
Jan 15 13:53:14.543569 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 15 13:53:16.079474 sshd[5549]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:16.086446 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Jan 15 13:53:16.087796 systemd[1]: sshd@20-10.230.9.202:22-147.75.109.163:42352.service: Deactivated successfully.
Jan 15 13:53:16.091258 systemd[1]: session-22.scope: Deactivated successfully.
Jan 15 13:53:16.094623 systemd-logind[1487]: Removed session 22.
Jan 15 13:53:16.231675 systemd[1]: Started sshd@21-10.230.9.202:22-147.75.109.163:42356.service - OpenSSH per-connection server daemon (147.75.109.163:42356).
Jan 15 13:53:16.375218 systemd[1]: run-containerd-runc-k8s.io-678b7cabbe43777178dd0069fbd6d1feb3d1666213ec2923dbc0d6ef4beda6d3-runc.LRRcOH.mount: Deactivated successfully.
Jan 15 13:53:17.163921 sshd[5560]: Accepted publickey for core from 147.75.109.163 port 42356 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:17.166249 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:17.173540 systemd-logind[1487]: New session 23 of user core.
Jan 15 13:53:17.180508 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 13:53:17.955677 sshd[5560]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:17.961230 systemd[1]: sshd@21-10.230.9.202:22-147.75.109.163:42356.service: Deactivated successfully.
Jan 15 13:53:17.964060 systemd[1]: session-23.scope: Deactivated successfully.
Jan 15 13:53:17.965545 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Jan 15 13:53:17.967259 systemd-logind[1487]: Removed session 23.
Jan 15 13:53:23.112706 systemd[1]: Started sshd@22-10.230.9.202:22-147.75.109.163:36288.service - OpenSSH per-connection server daemon (147.75.109.163:36288).
Jan 15 13:53:24.003735 sshd[5601]: Accepted publickey for core from 147.75.109.163 port 36288 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:24.006608 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:24.013434 systemd-logind[1487]: New session 24 of user core.
Jan 15 13:53:24.018493 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 15 13:53:24.733933 sshd[5601]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:24.739707 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Jan 15 13:53:24.739989 systemd[1]: sshd@22-10.230.9.202:22-147.75.109.163:36288.service: Deactivated successfully.
Jan 15 13:53:24.743737 systemd[1]: session-24.scope: Deactivated successfully.
Jan 15 13:53:24.746660 systemd-logind[1487]: Removed session 24.
Jan 15 13:53:29.898986 systemd[1]: Started sshd@23-10.230.9.202:22-147.75.109.163:47186.service - OpenSSH per-connection server daemon (147.75.109.163:47186).
Jan 15 13:53:30.823274 sshd[5618]: Accepted publickey for core from 147.75.109.163 port 47186 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:30.824684 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:30.832903 systemd-logind[1487]: New session 25 of user core.
Jan 15 13:53:30.842644 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 15 13:53:31.610810 sshd[5618]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:31.615786 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit.
Jan 15 13:53:31.616485 systemd[1]: sshd@23-10.230.9.202:22-147.75.109.163:47186.service: Deactivated successfully.
Jan 15 13:53:31.620070 systemd[1]: session-25.scope: Deactivated successfully.
Jan 15 13:53:31.623613 systemd-logind[1487]: Removed session 25.
Jan 15 13:53:36.773801 systemd[1]: Started sshd@24-10.230.9.202:22-147.75.109.163:47190.service - OpenSSH per-connection server daemon (147.75.109.163:47190).
Jan 15 13:53:37.701977 sshd[5679]: Accepted publickey for core from 147.75.109.163 port 47190 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 13:53:37.705093 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 13:53:37.714015 systemd-logind[1487]: New session 26 of user core.
Jan 15 13:53:37.720558 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 15 13:53:38.504638 sshd[5679]: pam_unix(sshd:session): session closed for user core
Jan 15 13:53:38.510990 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit.
Jan 15 13:53:38.512075 systemd[1]: sshd@24-10.230.9.202:22-147.75.109.163:47190.service: Deactivated successfully.
Jan 15 13:53:38.516592 systemd[1]: session-26.scope: Deactivated successfully.
Jan 15 13:53:38.520685 systemd-logind[1487]: Removed session 26.