Jan 17 12:32:29.024933 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:32:29.024979 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:32:29.024993 kernel: BIOS-provided physical RAM map: Jan 17 12:32:29.025008 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:32:29.025018 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:32:29.025028 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:32:29.025040 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 17 12:32:29.025050 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 17 12:32:29.025061 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:32:29.025071 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 17 12:32:29.025081 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:32:29.025092 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:32:29.025107 kernel: NX (Execute Disable) protection: active Jan 17 12:32:29.025118 kernel: APIC: Static calls initialized Jan 17 12:32:29.025130 kernel: SMBIOS 2.8 present. Jan 17 12:32:29.025142 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 17 12:32:29.025153 kernel: Hypervisor detected: KVM Jan 17 12:32:29.025169 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:32:29.025180 kernel: kvm-clock: using sched offset of 4305011732 cycles Jan 17 12:32:29.025193 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:32:29.025206 kernel: tsc: Detected 2500.032 MHz processor Jan 17 12:32:29.025217 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:32:29.025229 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:32:29.025241 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 17 12:32:29.025252 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:32:29.025268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:32:29.025284 kernel: Using GB pages for direct mapping Jan 17 12:32:29.025295 kernel: ACPI: Early table checksum verification disabled Jan 17 12:32:29.025307 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 17 12:32:29.025318 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025330 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025341 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025353 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 17 12:32:29.025364 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025376 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 17 12:32:29.025392 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025404 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:32:29.025415 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 17 12:32:29.025427 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 17 12:32:29.025438 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 17 12:32:29.025455 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 17 12:32:29.025467 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 17 12:32:29.025484 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 17 12:32:29.025498 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 17 12:32:29.025510 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:32:29.025522 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:32:29.025534 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 17 12:32:29.025546 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 17 12:32:29.025558 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 17 12:32:29.025574 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 17 12:32:29.025586 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 17 12:32:29.025598 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 17 12:32:29.025610 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 17 12:32:29.025622 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 17 12:32:29.025634 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 17 12:32:29.025645 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 17 12:32:29.025657 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 17 12:32:29.027696 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 17 12:32:29.027713 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 17 12:32:29.027732 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 17 12:32:29.027745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:32:29.027757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:32:29.027769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 17 12:32:29.027782 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 17 12:32:29.027794 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 17 12:32:29.027806 kernel: Zone ranges: Jan 17 12:32:29.027819 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:32:29.027831 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 17 12:32:29.027847 kernel: Normal empty Jan 17 12:32:29.027860 kernel: Movable zone start for each node Jan 17 12:32:29.027872 kernel: Early memory node ranges Jan 17 12:32:29.027884 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:32:29.027896 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 17 12:32:29.027919 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 17 12:32:29.027932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:32:29.027944 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:32:29.027956 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 17 12:32:29.027968 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:32:29.027985 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:32:29.027998 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 17 12:32:29.028010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:32:29.028022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:32:29.028034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:32:29.028046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:32:29.028058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:32:29.028070 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:32:29.028082 kernel: TSC deadline timer available Jan 17 12:32:29.028098 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 17 12:32:29.028111 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:32:29.028123 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 12:32:29.028135 kernel: Booting paravirtualized kernel on KVM Jan 17 12:32:29.028147 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:32:29.028159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 17 12:32:29.028171 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 17 12:32:29.028183 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 17 12:32:29.028195 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 17 12:32:29.028211 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:32:29.028223 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:32:29.028237 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:32:29.028250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:32:29.028262 kernel: random: crng init done Jan 17 12:32:29.028274 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:32:29.028286 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:32:29.028298 kernel: Fallback order for Node 0: 0 Jan 17 12:32:29.028314 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 17 12:32:29.028327 kernel: Policy zone: DMA32 Jan 17 12:32:29.028339 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:32:29.028351 kernel: software IO TLB: area num 16. Jan 17 12:32:29.028363 kernel: Memory: 1901540K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 194816K reserved, 0K cma-reserved) Jan 17 12:32:29.028375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 17 12:32:29.028387 kernel: Kernel/User page tables isolation: enabled Jan 17 12:32:29.028399 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:32:29.028411 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:32:29.028428 kernel: Dynamic Preempt: voluntary Jan 17 12:32:29.028440 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:32:29.028465 kernel: rcu: RCU event tracing is enabled. 
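The "Kernel command line:" entry above carries the whole Flatcar boot configuration (root=LABEL=ROOT, verity.usrhash=..., flatcar.oem.id=openstack, and so on). The same string is exposed at runtime through /proc/cmdline; a minimal sketch, assuming only a Linux host, that splits it into flags and key=value parameters (the parameter names shown are taken from the log above):

import shlex

def parse_cmdline(path="/proc/cmdline"):
    # Split the kernel command line into key=value parameters and bare flags.
    # If a key repeats (e.g. console= appears twice above), the last value wins.
    with open(path) as f:
        raw = f.read().strip()
    params, flags = {}, []
    for token in shlex.split(raw):
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value
        else:
            flags.append(token)
    return params, flags

if __name__ == "__main__":
    params, flags = parse_cmdline()
    print("root:       ", params.get("root"))
    print("usr hash:   ", params.get("verity.usrhash"))
    print("oem id:     ", params.get("flatcar.oem.id"))
    print("bare flags: ", flags)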
Jan 17 12:32:29.028477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 17 12:32:29.028490 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:32:29.028513 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:32:29.028542 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:32:29.028555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:32:29.028567 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 17 12:32:29.028580 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 17 12:32:29.028592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:32:29.028609 kernel: Console: colour VGA+ 80x25 Jan 17 12:32:29.028622 kernel: printk: console [tty0] enabled Jan 17 12:32:29.028635 kernel: printk: console [ttyS0] enabled Jan 17 12:32:29.028648 kernel: ACPI: Core revision 20230628 Jan 17 12:32:29.028660 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:32:29.028673 kernel: x2apic enabled Jan 17 12:32:29.028702 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:32:29.028717 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 17 12:32:29.028730 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Jan 17 12:32:29.028743 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:32:29.028768 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 12:32:29.028781 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 12:32:29.028793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:32:29.028805 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:32:29.028817 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:32:29.028841 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:32:29.028853 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 12:32:29.028866 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:32:29.028878 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:32:29.028915 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:32:29.028929 kernel: MMIO Stale Data: Unknown: No mitigations Jan 17 12:32:29.028941 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 17 12:32:29.028955 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:32:29.028968 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:32:29.028981 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:32:29.028993 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:32:29.029011 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:32:29.029024 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:32:29.029036 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:32:29.029049 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:32:29.029061 kernel: landlock: Up and running. Jan 17 12:32:29.029074 kernel: SELinux: Initializing. 
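The Spectre V1/V2, MDS, MMIO Stale Data and SRBDS lines above are the kernel's per-CPU-bug mitigation decisions; the same information is summarised under /sys/devices/system/cpu/vulnerabilities on any reasonably recent kernel. A small illustrative sketch that prints that summary:

from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report():
    # One file per known CPU vulnerability; the file content is the kernel's
    # one-line status, matching the "Mitigation: ..." messages in the log above.
    return {entry.name: entry.read_text().strip()
            for entry in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in mitigation_report().items():
        print(f"{name:25} {status}")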
Jan 17 12:32:29.029086 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:32:29.029099 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:32:29.029112 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 17 12:32:29.029124 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:32:29.029137 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:32:29.029155 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:32:29.029168 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 17 12:32:29.029180 kernel: signal: max sigframe size: 1776 Jan 17 12:32:29.029193 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:32:29.029206 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:32:29.029219 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:32:29.029232 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:32:29.029244 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:32:29.029257 kernel: .... node #0, CPUs: #1 Jan 17 12:32:29.029274 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 12:32:29.029287 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:32:29.029300 kernel: smpboot: Max logical packages: 16 Jan 17 12:32:29.029312 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Jan 17 12:32:29.029325 kernel: devtmpfs: initialized Jan 17 12:32:29.029337 kernel: x86/mm: Memory block size: 128MB Jan 17 12:32:29.029350 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:32:29.029363 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 17 12:32:29.029376 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:32:29.029392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:32:29.029405 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:32:29.029418 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:32:29.029430 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:32:29.029443 kernel: audit: type=2000 audit(1737117147.709:1): state=initialized audit_enabled=0 res=1 Jan 17 12:32:29.029455 kernel: cpuidle: using governor menu Jan 17 12:32:29.029468 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:32:29.029481 kernel: dca service started, version 1.12.1 Jan 17 12:32:29.029494 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:32:29.029511 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:32:29.029524 kernel: PCI: Using configuration type 1 for base access Jan 17 12:32:29.029537 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
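The SMP bring-up above ("Allowing 16 CPUs, 14 hotplug CPUs" earlier, "Brought up 1 node, 2 CPUs" here) means only 2 of the 16 possible vCPUs are online at boot; the rest are hot-pluggable. A minimal sketch, assuming the standard Linux cpu sysfs layout, that compares online and possible CPUs:

from pathlib import Path

def parse_cpu_list(text):
    # Expand a kernel CPU list such as "0-1" or "0,2-3" into a set of ints.
    cpus = set()
    for chunk in text.strip().split(","):
        if "-" in chunk:
            lo, hi = chunk.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif chunk:
            cpus.add(int(chunk))
    return cpus

base = Path("/sys/devices/system/cpu")
online = parse_cpu_list((base / "online").read_text())
possible = parse_cpu_list((base / "possible").read_text())
print(f"online:   {len(online)} CPUs {sorted(online)}")
print(f"possible: {len(possible)} CPUs ({len(possible - online)} offline, hot-pluggable)")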
Jan 17 12:32:29.029549 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:32:29.029562 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:32:29.029575 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:32:29.029587 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:32:29.029600 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:32:29.029612 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:32:29.029630 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:32:29.029643 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:32:29.029656 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:32:29.033704 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:32:29.033724 kernel: ACPI: Interpreter enabled Jan 17 12:32:29.033742 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:32:29.033755 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:32:29.033768 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:32:29.033781 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:32:29.033805 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:32:29.033818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:32:29.034072 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:32:29.034253 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 12:32:29.034420 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 12:32:29.034440 kernel: PCI host bridge to bus 0000:00 Jan 17 12:32:29.034644 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:32:29.034846 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:32:29.035012 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:32:29.035163 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 17 12:32:29.035312 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:32:29.035461 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 17 12:32:29.035610 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:32:29.037883 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:32:29.038126 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 17 12:32:29.038298 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 17 12:32:29.038470 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 17 12:32:29.038639 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 17 12:32:29.041332 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:32:29.041556 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.041809 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 17 12:32:29.042015 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.042183 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 17 12:32:29.042378 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.042550 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 17 12:32:29.045070 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 17 
12:32:29.045265 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jan 17 12:32:29.045469 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.045637 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 17 12:32:29.045879 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.046067 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 17 12:32:29.046298 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.046479 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 17 12:32:29.047564 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 17 12:32:29.047784 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 17 12:32:29.048022 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:32:29.048189 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 17 12:32:29.048379 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 17 12:32:29.048543 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 17 12:32:29.050736 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 17 12:32:29.050984 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:32:29.051157 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 12:32:29.051333 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 17 12:32:29.051522 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 17 12:32:29.052889 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:32:29.053103 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:32:29.053327 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:32:29.053517 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 17 12:32:29.053726 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 17 12:32:29.053939 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:32:29.054109 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 12:32:29.054330 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 17 12:32:29.057952 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 17 12:32:29.058133 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 17 12:32:29.058305 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 12:32:29.058471 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 12:32:29.058747 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 12:32:29.058973 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 17 12:32:29.059164 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 17 12:32:29.059335 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 12:32:29.059505 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 12:32:29.060758 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 17 12:32:29.060954 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 17 12:32:29.061126 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 12:32:29.061298 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 12:32:29.061464 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:32:29.062686 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 17 
12:32:29.062878 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 17 12:32:29.063061 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 12:32:29.063224 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 12:32:29.063386 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:32:29.063552 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 12:32:29.065762 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 12:32:29.065973 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:32:29.066147 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 12:32:29.066312 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 12:32:29.066482 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:32:29.066657 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 12:32:29.066860 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 12:32:29.067040 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:32:29.067206 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 12:32:29.067377 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 12:32:29.067538 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:32:29.069764 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 12:32:29.069957 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 12:32:29.070125 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:32:29.070145 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:32:29.070159 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:32:29.070172 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:32:29.070193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:32:29.070206 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:32:29.070219 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:32:29.070232 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:32:29.070245 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:32:29.070258 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:32:29.070271 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:32:29.070284 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:32:29.070297 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:32:29.070314 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:32:29.070327 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:32:29.070340 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:32:29.070353 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:32:29.070366 kernel: iommu: Default domain type: Translated Jan 17 12:32:29.070380 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:32:29.070401 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:32:29.070414 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:32:29.070427 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:32:29.070445 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 17 12:32:29.070622 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jan 17 12:32:29.070810 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:32:29.070991 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:32:29.071011 kernel: vgaarb: loaded Jan 17 12:32:29.071025 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:32:29.071038 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:32:29.071052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:32:29.071072 kernel: pnp: PnP ACPI init Jan 17 12:32:29.071267 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:32:29.071289 kernel: pnp: PnP ACPI: found 5 devices Jan 17 12:32:29.071309 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:32:29.071322 kernel: NET: Registered PF_INET protocol family Jan 17 12:32:29.071336 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:32:29.071349 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:32:29.071362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:32:29.071375 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:32:29.071395 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:32:29.071412 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:32:29.071425 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:32:29.071438 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:32:29.071451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:32:29.071464 kernel: NET: Registered PF_XDP protocol family Jan 17 12:32:29.071623 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 17 12:32:29.073859 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 17 12:32:29.074087 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 17 12:32:29.074256 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 17 12:32:29.074421 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 12:32:29.074592 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 12:32:29.074801 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 12:32:29.074984 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 12:32:29.075157 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 17 12:32:29.075318 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 17 12:32:29.075480 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 17 12:32:29.075644 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 17 12:32:29.077888 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 17 12:32:29.078077 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 17 12:32:29.078243 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 17 12:32:29.078416 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 17 12:32:29.078618 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 12:32:29.078818 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 
12:32:29.078998 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 17 12:32:29.079161 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 17 12:32:29.079324 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 12:32:29.079486 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 12:32:29.079648 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 12:32:29.084862 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 17 12:32:29.085062 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 12:32:29.085233 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:32:29.085401 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 12:32:29.085563 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 17 12:32:29.085749 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 12:32:29.085947 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:32:29.086111 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 12:32:29.086273 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 17 12:32:29.086434 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 12:32:29.086621 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:32:29.086806 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 12:32:29.087013 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 17 12:32:29.087182 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 12:32:29.087366 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:32:29.087544 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 12:32:29.087759 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 17 12:32:29.087981 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 12:32:29.088159 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:32:29.088333 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 12:32:29.088502 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 17 12:32:29.089799 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 12:32:29.090000 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:32:29.090167 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 12:32:29.090335 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 17 12:32:29.090497 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 12:32:29.090692 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:32:29.090862 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:32:29.091027 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:32:29.091185 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:32:29.091332 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 17 12:32:29.091478 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:32:29.091623 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 17 12:32:29.095055 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 17 12:32:29.095230 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 17 12:32:29.095405 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jan 17 12:32:29.095599 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 17 12:32:29.095806 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 17 12:32:29.095999 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 17 12:32:29.096153 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:32:29.096360 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 17 12:32:29.096515 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 17 12:32:29.096668 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:32:29.103459 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 17 12:32:29.103643 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 17 12:32:29.103835 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:32:29.104038 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 17 12:32:29.104198 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 17 12:32:29.104356 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:32:29.104560 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 17 12:32:29.104749 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 17 12:32:29.104916 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:32:29.105091 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 17 12:32:29.105271 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 17 12:32:29.105416 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:32:29.105591 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 17 12:32:29.105800 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 17 12:32:29.105979 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:32:29.106001 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:32:29.106015 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:32:29.106029 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:32:29.106043 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 17 12:32:29.106057 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:32:29.106070 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 17 12:32:29.106084 kernel: Initialise system trusted keyrings Jan 17 12:32:29.106104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:32:29.106118 kernel: Key type asymmetric registered Jan 17 12:32:29.106131 kernel: Asymmetric key parser 'x509' registered Jan 17 12:32:29.106145 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:32:29.106158 kernel: io scheduler mq-deadline registered Jan 17 12:32:29.106184 kernel: io scheduler kyber registered Jan 17 12:32:29.106197 kernel: io scheduler bfq registered Jan 17 12:32:29.106355 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 12:32:29.106529 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 12:32:29.106758 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.106943 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 12:32:29.107108 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jan 17 12:32:29.107277 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.107430 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 12:32:29.107582 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 12:32:29.107827 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.108007 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 12:32:29.108171 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 12:32:29.108335 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.108498 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 12:32:29.108660 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 12:32:29.108858 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.109035 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 12:32:29.109211 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 12:32:29.109377 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.109532 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 12:32:29.111418 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 12:32:29.111632 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.111834 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 12:32:29.112017 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 12:32:29.112193 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 12:32:29.112214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:32:29.112228 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:32:29.112263 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:32:29.112281 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:32:29.112295 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:32:29.112309 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:32:29.112323 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:32:29.112337 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:32:29.112350 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:32:29.112531 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 12:32:29.114815 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 12:32:29.115010 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:32:28 UTC (1737117148) Jan 17 12:32:29.115168 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 12:32:29.115188 kernel: intel_pstate: CPU model not supported Jan 17 12:32:29.115203 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:32:29.115217 kernel: Segment Routing with IPv6 Jan 17 12:32:29.115230 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 
12:32:29.115244 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:32:29.115264 kernel: Key type dns_resolver registered Jan 17 12:32:29.115286 kernel: IPI shorthand broadcast: enabled Jan 17 12:32:29.115301 kernel: sched_clock: Marking stable (1217027142, 247162726)->(1718329959, -254140091) Jan 17 12:32:29.115326 kernel: registered taskstats version 1 Jan 17 12:32:29.115338 kernel: Loading compiled-in X.509 certificates Jan 17 12:32:29.115351 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:32:29.115363 kernel: Key type .fscrypt registered Jan 17 12:32:29.115375 kernel: Key type fscrypt-provisioning registered Jan 17 12:32:29.115387 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:32:29.115403 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:32:29.115415 kernel: ima: No architecture policies found Jan 17 12:32:29.115428 kernel: clk: Disabling unused clocks Jan 17 12:32:29.115440 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:32:29.115453 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:32:29.115465 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:32:29.115477 kernel: Run /init as init process Jan 17 12:32:29.115489 kernel: with arguments: Jan 17 12:32:29.115501 kernel: /init Jan 17 12:32:29.115513 kernel: with environment: Jan 17 12:32:29.115529 kernel: HOME=/ Jan 17 12:32:29.115541 kernel: TERM=linux Jan 17 12:32:29.115553 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:32:29.115568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:32:29.115584 systemd[1]: Detected virtualization kvm. Jan 17 12:32:29.115597 systemd[1]: Detected architecture x86-64. Jan 17 12:32:29.115614 systemd[1]: Running in initrd. Jan 17 12:32:29.115631 systemd[1]: No hostname configured, using default hostname. Jan 17 12:32:29.115644 systemd[1]: Hostname set to . Jan 17 12:32:29.115658 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:32:29.115683 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:32:29.115719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:32:29.115745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:32:29.115761 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:32:29.115775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:32:29.115808 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:32:29.115829 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:32:29.115845 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:32:29.115860 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:32:29.115875 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 17 12:32:29.115907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:32:29.115924 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:32:29.115944 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:32:29.115958 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:32:29.115972 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:32:29.115986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:32:29.116001 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:32:29.116015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:32:29.116030 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:32:29.116044 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:32:29.116059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:32:29.116078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:32:29.116093 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:32:29.116112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:32:29.116126 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:32:29.116141 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:32:29.116155 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:32:29.116170 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:32:29.116185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:32:29.116203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:32:29.116279 systemd-journald[201]: Collecting audit messages is disabled. Jan 17 12:32:29.116322 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:32:29.116336 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:32:29.116354 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:32:29.116368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:32:29.116382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:32:29.116395 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:32:29.116413 systemd-journald[201]: Journal started Jan 17 12:32:29.116439 systemd-journald[201]: Runtime Journal (/run/log/journal/fa24acb52ef94e55abf6946601994b72) is 4.7M, max 38.0M, 33.2M free. Jan 17 12:32:29.052037 systemd-modules-load[202]: Inserted module 'overlay' Jan 17 12:32:29.125486 kernel: Bridge firewalling registered Jan 17 12:32:29.124916 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 17 12:32:29.128162 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:32:29.129414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:32:29.130487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:32:29.146914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:32:29.154110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:32:29.160402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:32:29.169769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:32:29.173085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:32:29.183971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:32:29.188164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:32:29.190437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:32:29.195909 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:32:29.198860 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:32:29.218832 dracut-cmdline[234]: dracut-dracut-053 Jan 17 12:32:29.227015 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:32:29.248082 systemd-resolved[235]: Positive Trust Anchors: Jan 17 12:32:29.248103 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:32:29.248149 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:32:29.252379 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 17 12:32:29.254342 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:32:29.255449 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:32:29.345749 kernel: SCSI subsystem initialized Jan 17 12:32:29.357746 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:32:29.371733 kernel: iscsi: registered transport (tcp) Jan 17 12:32:29.398284 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:32:29.398337 kernel: QLogic iSCSI HBA Driver Jan 17 12:32:29.453903 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:32:29.466335 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:32:29.497609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 12:32:29.497717 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:32:29.500697 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:32:29.549723 kernel: raid6: sse2x4 gen() 12524 MB/s Jan 17 12:32:29.567754 kernel: raid6: sse2x2 gen() 8465 MB/s Jan 17 12:32:29.586597 kernel: raid6: sse2x1 gen() 8542 MB/s Jan 17 12:32:29.586709 kernel: raid6: using algorithm sse2x4 gen() 12524 MB/s Jan 17 12:32:29.605437 kernel: raid6: .... xor() 7342 MB/s, rmw enabled Jan 17 12:32:29.605493 kernel: raid6: using ssse3x2 recovery algorithm Jan 17 12:32:29.632753 kernel: xor: automatically using best checksumming function avx Jan 17 12:32:29.829836 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:32:29.846309 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:32:29.854964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:32:29.874389 systemd-udevd[418]: Using default interface naming scheme 'v255'. Jan 17 12:32:29.881710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:32:29.890836 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:32:29.915973 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 17 12:32:29.956571 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:32:29.962925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:32:30.075593 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:32:30.083855 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:32:30.110749 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:32:30.112774 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:32:30.113882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:32:30.116153 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:32:30.123648 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:32:30.155068 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:32:30.196803 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 17 12:32:30.254106 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 12:32:30.254359 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:32:30.254381 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:32:30.254400 kernel: GPT:17805311 != 125829119 Jan 17 12:32:30.254417 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:32:30.254434 kernel: GPT:17805311 != 125829119 Jan 17 12:32:30.254451 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:32:30.254479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:32:30.254497 kernel: AVX version of gcm_enc/dec engaged. Jan 17 12:32:30.254514 kernel: AES CTR mode by8 optimization enabled Jan 17 12:32:30.251465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
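The GPT warning above ("GPT:17805311 != 125829119 ... Use GNU Parted to correct GPT errors") is the usual sign of a cloud image that was written for a smaller disk and then attached to a larger one: the primary GPT header still points at the old end-of-disk location for its backup copy. A minimal sketch reproducing that check by reading the alternate-header LBA out of the primary GPT header; it assumes 512-byte logical sectors, as reported for /dev/vda above, and needs read access to the device:

import struct

def gpt_backup_mismatch(dev="/dev/vda", sector=512):
    with open(dev, "rb") as f:
        f.seek(0, 2)
        last_lba = f.tell() // sector - 1      # last addressable LBA on the disk
        f.seek(1 * sector)                     # primary GPT header lives at LBA 1
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature found")
    alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]   # backup header LBA field
    return alternate_lba, last_lba

if __name__ == "__main__":
    alt, last = gpt_backup_mismatch()
    if alt != last:
        print(f"backup header expected at LBA {alt}, disk actually ends at LBA {last}")
        print("-> disk was grown; the partition table still describes the old size")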
Jan 17 12:32:30.266421 kernel: ACPI: bus type USB registered Jan 17 12:32:30.266477 kernel: usbcore: registered new interface driver usbfs Jan 17 12:32:30.266497 kernel: usbcore: registered new interface driver hub Jan 17 12:32:30.266515 kernel: usbcore: registered new device driver usb Jan 17 12:32:30.251655 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:32:30.254841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:32:30.274263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:32:30.274465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:32:30.275944 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:32:30.286624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:32:30.312775 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 12:32:30.313046 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 17 12:32:30.313285 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 12:32:30.313496 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 12:32:30.313709 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 17 12:32:30.313961 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 17 12:32:30.316558 kernel: hub 1-0:1.0: USB hub found Jan 17 12:32:30.318497 kernel: hub 1-0:1.0: 4 ports detected Jan 17 12:32:30.319896 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 17 12:32:30.320129 kernel: hub 2-0:1.0: USB hub found Jan 17 12:32:30.320371 kernel: hub 2-0:1.0: 4 ports detected Jan 17 12:32:30.337080 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (469) Jan 17 12:32:30.348716 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Jan 17 12:32:30.402694 kernel: libata version 3.00 loaded. Jan 17 12:32:30.408689 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:32:30.422764 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:32:30.422789 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:32:30.423031 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:32:30.423303 kernel: scsi host0: ahci Jan 17 12:32:30.423526 kernel: scsi host1: ahci Jan 17 12:32:30.423763 kernel: scsi host2: ahci Jan 17 12:32:30.424011 kernel: scsi host3: ahci Jan 17 12:32:30.424222 kernel: scsi host4: ahci Jan 17 12:32:30.424420 kernel: scsi host5: ahci Jan 17 12:32:30.424601 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 17 12:32:30.424628 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 17 12:32:30.424646 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 17 12:32:30.424662 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 17 12:32:30.424679 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 17 12:32:30.424708 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 17 12:32:30.409848 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:32:30.479443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 12:32:30.487782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:32:30.495202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:32:30.502614 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:32:30.503553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:32:30.517910 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:32:30.521641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:32:30.529170 disk-uuid[561]: Primary Header is updated. Jan 17 12:32:30.529170 disk-uuid[561]: Secondary Entries is updated. Jan 17 12:32:30.529170 disk-uuid[561]: Secondary Header is updated. Jan 17 12:32:30.534811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:32:30.541796 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 12:32:30.546753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:32:30.559078 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:32:30.689732 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:32:30.734551 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.734614 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.734925 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.736993 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.739712 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.743742 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:32:30.754120 kernel: usbcore: registered new interface driver usbhid Jan 17 12:32:30.754167 kernel: usbhid: USB HID core driver Jan 17 12:32:30.763494 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 12:32:30.763566 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 17 12:32:31.555741 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:32:31.557175 disk-uuid[562]: The operation has completed successfully. Jan 17 12:32:31.601293 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:32:31.601469 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:32:31.631957 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:32:31.638376 sh[585]: Success Jan 17 12:32:31.656721 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 17 12:32:31.738495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:32:31.739579 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:32:31.743026 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 17 12:32:31.775036 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:32:31.775115 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:32:31.775152 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:32:31.777969 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:32:31.780747 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:32:31.789411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:32:31.791244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:32:31.804168 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:32:31.807885 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:32:31.824123 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:32:31.824191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:32:31.825871 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:32:31.831839 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:32:31.845253 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:32:31.850497 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:32:31.857129 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:32:31.861950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:32:31.990403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:32:32.000949 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:32:32.014143 ignition[673]: Ignition 2.19.0 Jan 17 12:32:32.014171 ignition[673]: Stage: fetch-offline Jan 17 12:32:32.014253 ignition[673]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:32.018110 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:32:32.014284 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:32.014478 ignition[673]: parsed url from cmdline: "" Jan 17 12:32:32.014484 ignition[673]: no config URL provided Jan 17 12:32:32.014494 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:32:32.014509 ignition[673]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:32:32.014517 ignition[673]: failed to fetch config: resource requires networking Jan 17 12:32:32.014885 ignition[673]: Ignition finished successfully Jan 17 12:32:32.045462 systemd-networkd[775]: lo: Link UP Jan 17 12:32:32.045477 systemd-networkd[775]: lo: Gained carrier Jan 17 12:32:32.047926 systemd-networkd[775]: Enumeration completed Jan 17 12:32:32.048044 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:32:32.049944 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:32:32.049949 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:32:32.050931 systemd[1]: Reached target network.target - Network. 
Jan 17 12:32:32.052727 systemd-networkd[775]: eth0: Link UP Jan 17 12:32:32.052733 systemd-networkd[775]: eth0: Gained carrier Jan 17 12:32:32.052744 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:32:32.058865 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:32:32.071822 systemd-networkd[775]: eth0: DHCPv4 address 10.230.31.94/30, gateway 10.230.31.93 acquired from 10.230.31.93 Jan 17 12:32:32.080182 ignition[779]: Ignition 2.19.0 Jan 17 12:32:32.080198 ignition[779]: Stage: fetch Jan 17 12:32:32.080462 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:32.080492 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:32.080634 ignition[779]: parsed url from cmdline: "" Jan 17 12:32:32.080641 ignition[779]: no config URL provided Jan 17 12:32:32.080650 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:32:32.080666 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:32:32.081755 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 17 12:32:32.081790 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 17 12:32:32.081834 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 17 12:32:32.101916 ignition[779]: GET result: OK Jan 17 12:32:32.102406 ignition[779]: parsing config with SHA512: a1f0b4dc9bcfb9d89d799904347672d5ea180d643380ddc49ac99c37560410022f02662b0daca4e1a209a64bcb55f98fcd33d7149f5421b7a97911206f66e44e Jan 17 12:32:32.108115 unknown[779]: fetched base config from "system" Jan 17 12:32:32.108133 unknown[779]: fetched base config from "system" Jan 17 12:32:32.108921 ignition[779]: fetch: fetch complete Jan 17 12:32:32.108155 unknown[779]: fetched user config from "openstack" Jan 17 12:32:32.108931 ignition[779]: fetch: fetch passed Jan 17 12:32:32.110862 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:32:32.108999 ignition[779]: Ignition finished successfully Jan 17 12:32:32.117987 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:32:32.139447 ignition[787]: Ignition 2.19.0 Jan 17 12:32:32.139478 ignition[787]: Stage: kargs Jan 17 12:32:32.139765 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:32.139895 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:32.142386 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:32:32.141089 ignition[787]: kargs: kargs passed Jan 17 12:32:32.141166 ignition[787]: Ignition finished successfully Jan 17 12:32:32.151940 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:32:32.170078 ignition[794]: Ignition 2.19.0 Jan 17 12:32:32.170095 ignition[794]: Stage: disks Jan 17 12:32:32.170333 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:32.170352 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:32.174377 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:32:32.173245 ignition[794]: disks: disks passed Jan 17 12:32:32.176552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:32:32.173313 ignition[794]: Ignition finished successfully Jan 17 12:32:32.177440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 17 12:32:32.178926 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:32:32.180477 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:32:32.181836 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:32:32.194921 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:32:32.213138 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:32:32.217475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:32:32.227885 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:32:32.347705 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:32:32.348368 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:32:32.349857 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:32:32.355821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:32:32.358717 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:32:32.362193 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:32:32.364727 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 17 12:32:32.381460 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) Jan 17 12:32:32.381513 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:32:32.381535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:32:32.381553 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:32:32.366733 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:32:32.366814 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:32:32.372361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:32:32.388728 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:32:32.394836 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:32:32.403269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:32:32.496130 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:32:32.504955 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:32:32.513923 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:32:32.520035 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:32:32.624894 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:32:32.629886 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:32:32.633919 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:32:32.647699 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:32:32.671970 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 12:32:32.687325 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:32:32.687325 ignition[928]: INFO : Stage: mount Jan 17 12:32:32.689175 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:32.689175 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:32.691145 ignition[928]: INFO : mount: mount passed Jan 17 12:32:32.691145 ignition[928]: INFO : Ignition finished successfully Jan 17 12:32:32.690693 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:32:32.772032 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:32:33.165043 systemd-networkd[775]: eth0: Gained IPv6LL Jan 17 12:32:34.672599 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87d7:24:19ff:fee6:1f5e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87d7:24:19ff:fee6:1f5e/64 assigned by NDisc. Jan 17 12:32:34.672629 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 12:32:39.563942 coreos-metadata[812]: Jan 17 12:32:39.563 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:32:39.587413 coreos-metadata[812]: Jan 17 12:32:39.587 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 12:32:39.599287 coreos-metadata[812]: Jan 17 12:32:39.599 INFO Fetch successful Jan 17 12:32:39.600545 coreos-metadata[812]: Jan 17 12:32:39.600 INFO wrote hostname srv-hkhka.gb1.brightbox.com to /sysroot/etc/hostname Jan 17 12:32:39.603017 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 17 12:32:39.604396 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 17 12:32:39.612780 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:32:39.637872 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:32:39.666687 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945) Jan 17 12:32:39.666744 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:32:39.667846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:32:39.669867 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:32:39.675879 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:32:39.678205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:32:39.709867 ignition[963]: INFO : Ignition 2.19.0 Jan 17 12:32:39.709867 ignition[963]: INFO : Stage: files Jan 17 12:32:39.711748 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:39.711748 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:39.711748 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:32:39.714706 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:32:39.714706 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:32:39.716769 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:32:39.717799 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:32:39.717799 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:32:39.717336 unknown[963]: wrote ssh authorized keys file for user: core Jan 17 12:32:39.720823 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:32:39.720823 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:32:39.720823 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:32:39.720823 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:32:39.975210 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:32:40.655562 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:32:40.662964 ignition[963]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:32:40.662964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:32:41.190699 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:32:44.990239 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:32:44.990239 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:32:44.993873 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:32:44.993873 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:32:44.993873 ignition[963]: INFO : files: files passed Jan 17 12:32:44.993873 ignition[963]: INFO : Ignition finished successfully Jan 17 12:32:44.996823 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:32:45.007079 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:32:45.016945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:32:45.027314 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:32:45.027560 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 12:32:45.037741 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:32:45.037741 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:32:45.040626 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:32:45.042122 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:32:45.043895 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:32:45.050972 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:32:45.083535 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:32:45.083753 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:32:45.086131 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:32:45.087539 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:32:45.089425 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:32:45.095889 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:32:45.117142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:32:45.128926 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:32:45.142107 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:32:45.144094 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:32:45.146074 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:32:45.146855 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:32:45.147016 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:32:45.149176 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:32:45.150227 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:32:45.151744 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:32:45.154055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:32:45.155660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:32:45.157352 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:32:45.159000 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:32:45.160593 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:32:45.162173 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:32:45.163806 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:32:45.165145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:32:45.165365 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:32:45.167235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:32:45.168287 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:32:45.169927 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:32:45.170116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 12:32:45.171714 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:32:45.171885 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:32:45.173947 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:32:45.174115 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:32:45.175080 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:32:45.175256 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:32:45.183968 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:32:45.186416 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:32:45.194787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:32:45.196051 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:32:45.197918 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:32:45.198120 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:32:45.207133 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:32:45.208151 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:32:45.226833 ignition[1015]: INFO : Ignition 2.19.0 Jan 17 12:32:45.226833 ignition[1015]: INFO : Stage: umount Jan 17 12:32:45.228822 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:32:45.228822 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:32:45.228822 ignition[1015]: INFO : umount: umount passed Jan 17 12:32:45.228822 ignition[1015]: INFO : Ignition finished successfully Jan 17 12:32:45.233813 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:32:45.234741 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:32:45.235857 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:32:45.237341 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:32:45.237510 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:32:45.238455 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:32:45.238535 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:32:45.239951 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:32:45.240034 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:32:45.241394 systemd[1]: Stopped target network.target - Network. Jan 17 12:32:45.242829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:32:45.242921 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:32:45.244433 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:32:45.246753 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:32:45.251750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:32:45.252693 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:32:45.254512 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:32:45.256014 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:32:45.256095 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:32:45.257409 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 17 12:32:45.257486 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:32:45.258867 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:32:45.258941 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:32:45.260340 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:32:45.260444 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:32:45.262208 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:32:45.264864 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:32:45.266745 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:32:45.267060 systemd-networkd[775]: eth0: DHCPv6 lease lost Jan 17 12:32:45.268416 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:32:45.270486 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:32:45.270678 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:32:45.274498 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:32:45.274567 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:32:45.275536 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:32:45.275606 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:32:45.283931 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:32:45.285642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:32:45.286641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:32:45.288969 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:32:45.290478 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:32:45.292059 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:32:45.296188 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:32:45.296456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:32:45.304526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:32:45.305528 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:32:45.307641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:32:45.307760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:32:45.309567 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:32:45.309643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:32:45.313480 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:32:45.313551 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:32:45.315170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:32:45.315276 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:32:45.322974 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:32:45.323885 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:32:45.323991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:32:45.327146 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 17 12:32:45.327219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:32:45.330947 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:32:45.331045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:32:45.331889 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:32:45.331954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:32:45.334095 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:32:45.334167 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:32:45.335569 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:32:45.335636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:32:45.337277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:32:45.337351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:32:45.339763 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:32:45.339939 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:32:45.342243 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:32:45.342434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:32:45.344322 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:32:45.352918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:32:45.364666 systemd[1]: Switching root. Jan 17 12:32:45.400092 systemd-journald[201]: Journal stopped Jan 17 12:32:46.967284 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 17 12:32:46.967499 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:32:46.967539 kernel: SELinux: policy capability open_perms=1 Jan 17 12:32:46.967575 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:32:46.967600 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:32:46.967641 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:32:46.967685 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:32:46.967723 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:32:46.967743 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:32:46.967766 kernel: audit: type=1403 audit(1737117165.730:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:32:46.967809 systemd[1]: Successfully loaded SELinux policy in 52.090ms. Jan 17 12:32:46.967877 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.683ms. Jan 17 12:32:46.967911 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:32:46.967949 systemd[1]: Detected virtualization kvm. Jan 17 12:32:46.967977 systemd[1]: Detected architecture x86-64. Jan 17 12:32:46.968003 systemd[1]: Detected first boot. Jan 17 12:32:46.968039 systemd[1]: Hostname set to <srv-hkhka.gb1.brightbox.com>. Jan 17 12:32:46.968060 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:32:46.968087 zram_generator::config[1078]: No configuration found. Jan 17 12:32:46.968114 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:32:46.968134 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:32:46.968155 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:32:46.968185 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:32:46.968208 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:32:46.968254 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:32:46.968282 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:32:46.968342 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:32:46.968379 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:32:46.968401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:32:46.968421 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:32:46.968440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:32:46.968466 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:32:46.968487 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:32:46.968522 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:32:46.968543 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:32:46.968569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:32:46.968590 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:32:46.968627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:32:46.968660 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:32:46.971279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:32:46.971339 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:32:46.971370 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:32:46.971396 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:32:46.971422 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:32:46.971443 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:32:46.971487 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:32:46.971524 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:32:46.971552 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:32:46.971579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:32:46.971645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:32:46.971715 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:32:46.971742 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:32:46.971775 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 17 12:32:46.971808 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:32:46.971843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:46.971865 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:32:46.971885 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:32:46.971911 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:32:46.971944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:32:46.971972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:32:46.971999 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:32:46.972041 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:32:46.972060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:32:46.972103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:32:46.972124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:32:46.972169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:32:46.972189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:32:46.972221 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:32:46.972248 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:32:46.972276 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:32:46.972296 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:32:46.972343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:32:46.972365 kernel: fuse: init (API version 7.39) Jan 17 12:32:46.972384 kernel: loop: module loaded Jan 17 12:32:46.972408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:32:46.972427 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:32:46.972489 systemd-journald[1185]: Collecting audit messages is disabled. Jan 17 12:32:46.972540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:32:46.972569 kernel: ACPI: bus type drm_connector registered Jan 17 12:32:46.972615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:46.972642 systemd-journald[1185]: Journal started Jan 17 12:32:46.972702 systemd-journald[1185]: Runtime Journal (/run/log/journal/fa24acb52ef94e55abf6946601994b72) is 4.7M, max 38.0M, 33.2M free. Jan 17 12:32:46.983904 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:32:46.986172 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:32:46.987079 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:32:46.988008 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 17 12:32:46.988890 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:32:46.989773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:32:46.990638 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:32:46.991847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:32:46.993085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:32:46.994422 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:32:46.994730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:32:46.996160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:32:46.996456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:32:46.997656 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:32:46.998049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:32:46.999213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:32:46.999464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:32:47.000912 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:32:47.001158 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:32:47.002502 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:32:47.004946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:32:47.007155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:32:47.009290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:32:47.012301 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:32:47.028179 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:32:47.035808 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:32:47.050760 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:32:47.051606 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:32:47.061850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:32:47.074841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:32:47.075691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:32:47.077899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:32:47.079345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:32:47.091886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:32:47.104906 systemd-journald[1185]: Time spent on flushing to /var/log/journal/fa24acb52ef94e55abf6946601994b72 is 30.041ms for 1125 entries. Jan 17 12:32:47.104906 systemd-journald[1185]: System Journal (/var/log/journal/fa24acb52ef94e55abf6946601994b72) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:32:47.164294 systemd-journald[1185]: Received client request to flush runtime journal. 
Jan 17 12:32:47.105863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:32:47.113220 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:32:47.114176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:32:47.121484 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:32:47.124643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:32:47.147516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:32:47.166135 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:32:47.189804 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jan 17 12:32:47.189826 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jan 17 12:32:47.198926 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:32:47.206969 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:32:47.247212 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:32:47.258867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:32:47.271911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:32:47.283880 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:32:47.303978 udevadm[1252]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:32:47.308139 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 17 12:32:47.308164 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 17 12:32:47.316278 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:32:47.874814 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:32:47.882888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:32:47.931480 systemd-udevd[1258]: Using default interface naming scheme 'v255'. Jan 17 12:32:47.960347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:32:47.972951 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:32:48.007931 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:32:48.035598 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:32:48.113009 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:32:48.116280 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:32:48.165709 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:32:48.194701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1266) Jan 17 12:32:48.210830 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:32:48.228154 systemd-networkd[1263]: lo: Link UP Jan 17 12:32:48.228770 systemd-networkd[1263]: lo: Gained carrier Jan 17 12:32:48.235211 systemd-networkd[1263]: Enumeration completed Jan 17 12:32:48.236911 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 17 12:32:48.237092 systemd-networkd[1263]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:32:48.237101 systemd-networkd[1263]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:32:48.240744 systemd-networkd[1263]: eth0: Link UP Jan 17 12:32:48.240759 systemd-networkd[1263]: eth0: Gained carrier Jan 17 12:32:48.240776 systemd-networkd[1263]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:32:48.244894 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:32:48.252014 systemd-networkd[1263]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:32:48.261206 systemd-networkd[1263]: eth0: DHCPv4 address 10.230.31.94/30, gateway 10.230.31.93 acquired from 10.230.31.93 Jan 17 12:32:48.316690 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:32:48.323382 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:32:48.323714 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:32:48.361714 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 12:32:48.403437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:32:48.417008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:32:48.600065 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:32:48.633977 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:32:48.635371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:32:48.655730 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:32:48.686540 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:32:48.687667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:32:48.691926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:32:48.702954 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:32:48.735052 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:32:48.737314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:32:48.738399 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:32:48.738440 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:32:48.739581 systemd[1]: Reached target machines.target - Containers. Jan 17 12:32:48.742326 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:32:48.749881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:32:48.752476 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:32:48.753468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 17 12:32:48.758873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:32:48.770190 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:32:48.777190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:32:48.783034 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:32:48.802361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:32:48.815482 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:32:48.817837 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:32:48.816745 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:32:48.850009 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:32:48.869360 kernel: loop1: detected capacity change from 0 to 8 Jan 17 12:32:48.890120 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:32:48.937711 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:32:48.995709 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:32:49.019982 kernel: loop5: detected capacity change from 0 to 8 Jan 17 12:32:49.020084 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:32:49.035744 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:32:49.051833 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 12:32:49.054698 (sd-merge)[1322]: Merged extensions into '/usr'. Jan 17 12:32:49.061066 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:32:49.061105 systemd[1]: Reloading... Jan 17 12:32:49.158751 zram_generator::config[1347]: No configuration found. Jan 17 12:32:49.397348 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:32:49.403923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:32:49.496286 systemd[1]: Reloading finished in 434 ms. Jan 17 12:32:49.519281 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:32:49.520780 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:32:49.538108 systemd[1]: Starting ensure-sysext.service... Jan 17 12:32:49.552913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:32:49.558594 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:32:49.558633 systemd[1]: Reloading... Jan 17 12:32:49.599953 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:32:49.600585 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:32:49.602993 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:32:49.603616 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. 
Jan 17 12:32:49.603779 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 17 12:32:49.609157 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:32:49.609928 systemd-tmpfiles[1414]: Skipping /boot Jan 17 12:32:49.628004 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:32:49.628207 systemd-tmpfiles[1414]: Skipping /boot Jan 17 12:32:49.677956 systemd-networkd[1263]: eth0: Gained IPv6LL Jan 17 12:32:49.684766 zram_generator::config[1443]: No configuration found. Jan 17 12:32:49.882083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:32:49.971916 systemd[1]: Reloading finished in 412 ms. Jan 17 12:32:49.996130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:32:50.009885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:32:50.023989 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:32:50.040144 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:32:50.045131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:32:50.048863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:32:50.061202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:32:50.084538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.084876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:32:50.095431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:32:50.104980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:32:50.119962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:32:50.122596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:32:50.126117 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.152215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:32:50.152486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:32:50.161038 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:32:50.163743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:32:50.164032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:32:50.172409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.172923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:32:50.173196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
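The systemd-tmpfiles warnings above ("Duplicate line for path" on /root, /var/log/journal and /var/lib/systemd) mean two drop-ins declare the same path; the first definition wins and the later ones are ignored, so they are harmless. For reference, a tmpfiles.d entry has the shape below; these lines are illustrative and are not the contents of the files named in the log:

    # Type  Path               Mode  UID   GID              Age  Argument
    d       /var/log/journal   2755  root  systemd-journal  -    -
    d       /root              0700  root  root             -    -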
Jan 17 12:32:50.173371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:32:50.173529 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:32:50.174257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.175491 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:32:50.185286 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:32:50.185624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:32:50.195477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.196123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:32:50.207854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:32:50.211533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:32:50.220841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:32:50.221870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:32:50.221960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:32:50.221993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:32:50.223038 systemd[1]: Finished ensure-sysext.service. Jan 17 12:32:50.231898 augenrules[1549]: No rules Jan 17 12:32:50.233430 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:32:50.235381 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:32:50.236660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:32:50.237025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:32:50.238413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:32:50.238643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:32:50.246167 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:32:50.248939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:32:50.254511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:32:50.254966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:32:50.261995 systemd-resolved[1514]: Positive Trust Anchors: Jan 17 12:32:50.262259 systemd-resolved[1514]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:32:50.262306 systemd-resolved[1514]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:32:50.263924 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:32:50.270031 systemd-resolved[1514]: Using system hostname 'srv-hkhka.gb1.brightbox.com'. Jan 17 12:32:50.277955 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:32:50.279720 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:32:50.281156 systemd[1]: Reached target network.target - Network. Jan 17 12:32:50.282015 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:32:50.283073 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:32:50.299896 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:32:50.363526 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:32:50.364899 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:32:50.365811 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:32:50.366759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:32:50.367602 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:32:50.368413 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:32:50.368462 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:32:50.369170 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:32:50.370164 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:32:50.371078 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:32:50.371916 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:32:50.373794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:32:50.376766 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:32:50.379919 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:32:50.380758 systemd-networkd[1263]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87d7:24:19ff:fee6:1f5e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87d7:24:19ff:fee6:1f5e/64 assigned by NDisc. Jan 17 12:32:50.380883 systemd-networkd[1263]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 12:32:50.382941 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:32:50.383773 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:32:50.384521 systemd[1]: Reached target basic.target - Basic System. 
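The networkd hint above, about the DHCPv6 address conflicting with the NDisc-derived one, names two knobs: IPv6Token= to pin the SLAAC interface identifier, or UseAutonomousPrefix=no to stop forming an address from the advertised prefix. A minimal drop-in for the second option, shown purely as a sketch:

    # /etc/systemd/network/10-eth0.network.d/ipv6.conf -- illustrative only
    [IPv6AcceptRA]
    UseAutonomousPrefix=no   # keep only the DHCPv6-assigned address, per the hint in the log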
Jan 17 12:32:50.385494 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:32:50.385556 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:32:50.385605 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:32:50.388412 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:32:50.392898 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:32:50.401824 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:32:50.408155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:32:50.413038 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:32:50.417796 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:32:50.432393 jq[1576]: false Jan 17 12:32:50.434815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:32:50.447412 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:32:50.450652 dbus-daemon[1575]: [system] SELinux support is enabled Jan 17 12:32:50.455046 dbus-daemon[1575]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1263 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:32:50.460872 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:32:50.470833 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:32:50.476380 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:32:50.484466 extend-filesystems[1579]: Found loop4 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found loop5 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found loop6 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found loop7 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda1 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda2 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda3 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found usr Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda4 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda6 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda7 Jan 17 12:32:50.487371 extend-filesystems[1579]: Found vda9 Jan 17 12:32:50.487371 extend-filesystems[1579]: Checking size of /dev/vda9 Jan 17 12:32:50.550707 extend-filesystems[1579]: Resized partition /dev/vda9 Jan 17 12:32:50.557750 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 17 12:32:50.489870 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:32:50.558136 extend-filesystems[1606]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:32:50.518056 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:32:50.526288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:32:50.536866 systemd[1]: Starting update-engine.service - Update Engine... 
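extend-filesystems.service above enumerates the block devices, settles on /dev/vda9 and requests an online ext4 grow (from 1617920 to 15121403 blocks, which completes a few entries later). Done by hand it amounts to roughly the following; this is a sketch, not the unit's literal commands:

    lsblk /dev/vda          # confirm vda9 backs the root filesystem
    resize2fs /dev/vda9     # online-grow the mounted ext4 filesystem to fill the partition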
Jan 17 12:32:50.573854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:32:50.576284 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:32:50.593325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1262) Jan 17 12:32:50.592310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:32:50.592712 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:32:50.594159 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:32:50.594524 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:32:50.599172 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:32:50.608408 jq[1611]: true Jan 17 12:32:50.615146 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:32:50.615524 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:32:50.674194 update_engine[1605]: I20250117 12:32:50.674042 1605 main.cc:92] Flatcar Update Engine starting Jan 17 12:32:50.677354 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:32:50.682340 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:32:50.682388 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:32:50.684827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:32:50.684862 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:32:50.690939 jq[1620]: true Jan 17 12:32:50.694052 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:32:50.711061 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:32:50.721719 update_engine[1605]: I20250117 12:32:50.718580 1605 update_check_scheduler.cc:74] Next update check in 6m36s Jan 17 12:32:50.725866 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:32:50.730036 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:32:50.745804 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:32:50.761402 tar[1617]: linux-amd64/helm Jan 17 12:32:50.978725 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:32:51.004318 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:32:50.982761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:32:50.991875 systemd[1]: Starting sshkeys.service... Jan 17 12:32:51.007186 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:32:51.007186 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:32:51.007186 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Jan 17 12:32:51.025806 extend-filesystems[1579]: Resized filesystem in /dev/vda9 Jan 17 12:32:51.010106 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:32:51.010479 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:32:51.029928 systemd-logind[1600]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:32:51.029972 systemd-logind[1600]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:32:51.030348 systemd-logind[1600]: New seat seat0. Jan 17 12:32:51.048412 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:32:51.060983 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:32:51.064101 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:32:51.254506 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:32:51.254744 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:32:51.260920 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1635 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:32:51.276065 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:32:51.297162 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:32:51.327961 polkitd[1684]: Started polkitd version 121 Jan 17 12:32:51.339307 polkitd[1684]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:32:51.342023 polkitd[1684]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:32:51.343048 polkitd[1684]: Finished loading, compiling and executing 2 rules Jan 17 12:32:51.344255 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:32:51.344655 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:32:51.347887 polkitd[1684]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:32:51.355221 containerd[1621]: time="2025-01-17T12:32:51.354782449Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:32:51.380195 systemd-hostnamed[1635]: Hostname set to (static) Jan 17 12:32:51.410497 sshd_keygen[1614]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:32:51.431940 containerd[1621]: time="2025-01-17T12:32:51.430798141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.435858875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.435913639Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.435940277Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436230525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436259519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436376997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436399682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436747024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436771963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436794969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:32:51.437533 containerd[1621]: time="2025-01-17T12:32:51.436813048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.438007 containerd[1621]: time="2025-01-17T12:32:51.437187471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:51.438007 containerd[1621]: time="2025-01-17T12:32:51.437583253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:32:52.248621 systemd-timesyncd[1565]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jan 17 12:32:52.250578 containerd[1621]: time="2025-01-17T12:32:52.248757992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:32:52.250578 containerd[1621]: time="2025-01-17T12:32:52.248818272Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:32:52.248696 systemd-timesyncd[1565]: Initial clock synchronization to Fri 2025-01-17 12:32:52.247700 UTC. Jan 17 12:32:52.249944 systemd-resolved[1514]: Clock change detected. Flushing caches. Jan 17 12:32:52.254207 containerd[1621]: time="2025-01-17T12:32:52.252509485Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:32:52.254207 containerd[1621]: time="2025-01-17T12:32:52.252630064Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:32:52.258420 containerd[1621]: time="2025-01-17T12:32:52.258381173Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.258466464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.258503033Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.258529070Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.258558394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.258759976Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:32:52.259327 containerd[1621]: time="2025-01-17T12:32:52.259169694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261442603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261477252Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261499196Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261531924Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261551678Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261576730Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261599380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261619895Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261639006Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261661696Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261682155Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261747108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261784092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 17 12:32:52.262127 containerd[1621]: time="2025-01-17T12:32:52.261817922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261838955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261865814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261888969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261908204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261927916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261948447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261971238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.261990531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262015117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262035555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262065079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262106798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262131158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.262725 containerd[1621]: time="2025-01-17T12:32:52.262149490Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:32:52.263191 containerd[1621]: time="2025-01-17T12:32:52.262226336Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:32:52.263191 containerd[1621]: time="2025-01-17T12:32:52.262260644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263664652Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263700905Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263719394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263756377Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263795301Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:32:52.264201 containerd[1621]: time="2025-01-17T12:32:52.263848130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.267493356Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.267599169Z" level=info msg="Connect containerd service" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.267680310Z" level=info msg="using 
legacy CRI server" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.267702184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.267886163Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.268749061Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.269861453Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.269949033Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.270103319Z" level=info msg="Start subscribing containerd event" Jan 17 12:32:52.270378 containerd[1621]: time="2025-01-17T12:32:52.270168802Z" level=info msg="Start recovering state" Jan 17 12:32:52.272524 containerd[1621]: time="2025-01-17T12:32:52.270273593Z" level=info msg="Start event monitor" Jan 17 12:32:52.272524 containerd[1621]: time="2025-01-17T12:32:52.270534510Z" level=info msg="Start snapshots syncer" Jan 17 12:32:52.272524 containerd[1621]: time="2025-01-17T12:32:52.270576387Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:32:52.272524 containerd[1621]: time="2025-01-17T12:32:52.270594195Z" level=info msg="Start streaming server" Jan 17 12:32:52.272524 containerd[1621]: time="2025-01-17T12:32:52.270694622Z" level=info msg="containerd successfully booted in 0.112830s" Jan 17 12:32:52.270858 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:32:52.298478 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:32:52.314028 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:32:52.325469 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:32:52.325801 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:32:52.341082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:32:52.365747 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:32:52.380873 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:32:52.390727 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:32:52.391866 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:32:52.659455 tar[1617]: linux-amd64/LICENSE Jan 17 12:32:52.661228 tar[1617]: linux-amd64/README.md Jan 17 12:32:52.676163 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:32:52.869481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
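The CRI plugin dump a few entries back encodes the effective containerd configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, the pause:3.8 sandbox image, and CNI under /opt/cni/bin and /etc/cni/net.d. Expressed as a config.toml fragment the same settings would look roughly like this; the layout is illustrative, only the values are taken from the log:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"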
Jan 17 12:32:52.882852 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:32:53.631876 kubelet[1730]: E0117 12:32:53.631462 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:32:53.634453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:32:53.634887 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:32:54.679751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:32:54.706267 systemd[1]: Started sshd@0-10.230.31.94:22-139.178.68.195:35672.service - OpenSSH per-connection server daemon (139.178.68.195:35672). Jan 17 12:32:55.594547 sshd[1741]: Accepted publickey for core from 139.178.68.195 port 35672 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:32:55.598538 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:32:55.617038 systemd-logind[1600]: New session 1 of user core. Jan 17 12:32:55.620420 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:32:55.626672 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:32:55.661568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:32:55.672150 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:32:55.681241 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:32:55.813225 systemd[1748]: Queued start job for default target default.target. Jan 17 12:32:55.813858 systemd[1748]: Created slice app.slice - User Application Slice. Jan 17 12:32:55.813890 systemd[1748]: Reached target paths.target - Paths. Jan 17 12:32:55.813912 systemd[1748]: Reached target timers.target - Timers. Jan 17 12:32:55.821406 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:32:55.832356 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:32:55.832547 systemd[1748]: Reached target sockets.target - Sockets. Jan 17 12:32:55.832717 systemd[1748]: Reached target basic.target - Basic System. Jan 17 12:32:55.833016 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:32:55.835033 systemd[1748]: Reached target default.target - Main User Target. Jan 17 12:32:55.835234 systemd[1748]: Startup finished in 145ms. Jan 17 12:32:55.847973 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:32:56.472731 systemd[1]: Started sshd@1-10.230.31.94:22-139.178.68.195:58994.service - OpenSSH per-connection server daemon (139.178.68.195:58994). Jan 17 12:32:57.365201 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 58994 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:32:57.367432 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:32:57.377243 systemd-logind[1600]: New session 2 of user core. Jan 17 12:32:57.385759 systemd[1]: Started session-2.scope - Session 2 of User core. 
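The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-bootstrapped node that file is written during 'kubeadm init' or 'kubeadm join', so the unit keeps crash-looping until then (the same error recurs later in this log). Purely to illustrate what the missing file is, a minimal KubeletConfiguration might look like the sketch below; none of the values come from this host:

    # /var/lib/kubelet/config.yaml -- hypothetical minimal example
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # consistent with SystemdCgroup=false in the containerd config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false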
Jan 17 12:32:57.450745 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:32:57.452519 login[1715]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:32:57.458268 systemd-logind[1600]: New session 3 of user core. Jan 17 12:32:57.469849 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:32:57.473557 systemd-logind[1600]: New session 4 of user core. Jan 17 12:32:57.479970 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:32:57.984030 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 17 12:32:57.989441 systemd[1]: sshd@1-10.230.31.94:22-139.178.68.195:58994.service: Deactivated successfully. Jan 17 12:32:57.993686 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:32:57.995186 systemd-logind[1600]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:32:57.996846 systemd-logind[1600]: Removed session 2. Jan 17 12:32:58.135724 systemd[1]: Started sshd@2-10.230.31.94:22-139.178.68.195:59000.service - OpenSSH per-connection server daemon (139.178.68.195:59000). Jan 17 12:32:58.309061 coreos-metadata[1573]: Jan 17 12:32:58.308 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:32:58.336297 coreos-metadata[1573]: Jan 17 12:32:58.336 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 17 12:32:58.342440 coreos-metadata[1573]: Jan 17 12:32:58.342 INFO Fetch failed with 404: resource not found Jan 17 12:32:58.342440 coreos-metadata[1573]: Jan 17 12:32:58.342 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 12:32:58.343316 coreos-metadata[1573]: Jan 17 12:32:58.343 INFO Fetch successful Jan 17 12:32:58.343511 coreos-metadata[1573]: Jan 17 12:32:58.343 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 17 12:32:58.424538 coreos-metadata[1573]: Jan 17 12:32:58.424 INFO Fetch successful Jan 17 12:32:58.424819 coreos-metadata[1573]: Jan 17 12:32:58.424 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 17 12:32:58.442635 coreos-metadata[1573]: Jan 17 12:32:58.442 INFO Fetch successful Jan 17 12:32:58.442938 coreos-metadata[1573]: Jan 17 12:32:58.442 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 17 12:32:58.461293 coreos-metadata[1573]: Jan 17 12:32:58.461 INFO Fetch successful Jan 17 12:32:58.461551 coreos-metadata[1573]: Jan 17 12:32:58.461 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 17 12:32:58.482299 coreos-metadata[1573]: Jan 17 12:32:58.482 INFO Fetch successful Jan 17 12:32:58.510782 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:32:58.511653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:32:59.011803 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 59000 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:32:59.013884 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:32:59.020136 systemd-logind[1600]: New session 5 of user core. Jan 17 12:32:59.027817 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:32:59.084237 coreos-metadata[1662]: Jan 17 12:32:59.084 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:32:59.106714 coreos-metadata[1662]: Jan 17 12:32:59.106 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 17 12:32:59.131677 coreos-metadata[1662]: Jan 17 12:32:59.131 INFO Fetch successful Jan 17 12:32:59.131891 coreos-metadata[1662]: Jan 17 12:32:59.131 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:32:59.164374 coreos-metadata[1662]: Jan 17 12:32:59.164 INFO Fetch successful Jan 17 12:32:59.166450 unknown[1662]: wrote ssh authorized keys file for user: core Jan 17 12:32:59.187370 update-ssh-keys[1812]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:32:59.188215 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:32:59.193384 systemd[1]: Finished sshkeys.service. Jan 17 12:32:59.198928 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:32:59.199212 systemd[1]: Startup finished in 18.369s (kernel) + 12.711s (userspace) = 31.080s. Jan 17 12:32:59.631781 sshd[1794]: pam_unix(sshd:session): session closed for user core Jan 17 12:32:59.637114 systemd[1]: sshd@2-10.230.31.94:22-139.178.68.195:59000.service: Deactivated successfully. Jan 17 12:32:59.641185 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:32:59.642340 systemd-logind[1600]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:32:59.643911 systemd-logind[1600]: Removed session 5. Jan 17 12:33:03.718065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:33:03.724543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:03.914539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:03.920835 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:33:04.006575 kubelet[1834]: E0117 12:33:04.006156 1834 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:33:04.011572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:33:04.011981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:33:09.784620 systemd[1]: Started sshd@3-10.230.31.94:22-139.178.68.195:40148.service - OpenSSH per-connection server daemon (139.178.68.195:40148). Jan 17 12:33:10.677499 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 40148 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:10.679575 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:10.687834 systemd-logind[1600]: New session 6 of user core. Jan 17 12:33:10.690701 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:33:11.295698 sshd[1843]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:11.299548 systemd-logind[1600]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:33:11.300208 systemd[1]: sshd@3-10.230.31.94:22-139.178.68.195:40148.service: Deactivated successfully. 
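Both metadata agents above fail to locate an OpenStack config-drive and fall back to the HTTP metadata service, first trying the OpenStack-style meta_data.json (404) and then succeeding on the EC2-compatible paths. The same endpoints can be walked by hand from inside the instance; a sketch using the URLs that appear in the log:

    curl -fsS http://169.254.169.254/latest/meta-data/hostname
    curl -fsS http://169.254.169.254/latest/meta-data/public-ipv4
    curl -fsS http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key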
Jan 17 12:33:11.304767 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:33:11.305881 systemd-logind[1600]: Removed session 6. Jan 17 12:33:11.444642 systemd[1]: Started sshd@4-10.230.31.94:22-139.178.68.195:40154.service - OpenSSH per-connection server daemon (139.178.68.195:40154). Jan 17 12:33:12.336712 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 40154 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:12.338814 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:12.345391 systemd-logind[1600]: New session 7 of user core. Jan 17 12:33:12.354687 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:33:12.949641 sshd[1851]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:12.955801 systemd[1]: sshd@4-10.230.31.94:22-139.178.68.195:40154.service: Deactivated successfully. Jan 17 12:33:12.959992 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:33:12.961782 systemd-logind[1600]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:33:12.963179 systemd-logind[1600]: Removed session 7. Jan 17 12:33:13.098618 systemd[1]: Started sshd@5-10.230.31.94:22-139.178.68.195:40170.service - OpenSSH per-connection server daemon (139.178.68.195:40170). Jan 17 12:33:13.975446 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 40170 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:13.977480 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:13.985145 systemd-logind[1600]: New session 8 of user core. Jan 17 12:33:13.992776 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:33:14.218058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:33:14.225738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:14.370498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:14.378844 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:33:14.491457 kubelet[1875]: E0117 12:33:14.490562 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:33:14.492810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:33:14.493169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:33:14.592623 sshd[1859]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:14.596475 systemd-logind[1600]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:33:14.597004 systemd[1]: sshd@5-10.230.31.94:22-139.178.68.195:40170.service: Deactivated successfully. Jan 17 12:33:14.600832 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:33:14.602375 systemd-logind[1600]: Removed session 8. Jan 17 12:33:14.744623 systemd[1]: Started sshd@6-10.230.31.94:22-139.178.68.195:40186.service - OpenSSH per-connection server daemon (139.178.68.195:40186). 
Jan 17 12:33:15.628783 sshd[1888]: Accepted publickey for core from 139.178.68.195 port 40186 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:15.630989 sshd[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:15.639120 systemd-logind[1600]: New session 9 of user core. Jan 17 12:33:15.649758 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:33:16.119595 sudo[1892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:33:16.120143 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:33:16.139793 sudo[1892]: pam_unix(sudo:session): session closed for user root Jan 17 12:33:16.284700 sshd[1888]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:16.289001 systemd[1]: sshd@6-10.230.31.94:22-139.178.68.195:40186.service: Deactivated successfully. Jan 17 12:33:16.293860 systemd-logind[1600]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:33:16.294744 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:33:16.296202 systemd-logind[1600]: Removed session 9. Jan 17 12:33:16.431619 systemd[1]: Started sshd@7-10.230.31.94:22-139.178.68.195:33416.service - OpenSSH per-connection server daemon (139.178.68.195:33416). Jan 17 12:33:17.322712 sshd[1897]: Accepted publickey for core from 139.178.68.195 port 33416 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:17.325439 sshd[1897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:17.333730 systemd-logind[1600]: New session 10 of user core. Jan 17 12:33:17.342779 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:33:17.802300 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:33:17.802801 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:33:17.810301 sudo[1902]: pam_unix(sudo:session): session closed for user root Jan 17 12:33:17.818753 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:33:17.819225 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:33:17.836602 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:33:17.847814 auditctl[1905]: No rules Jan 17 12:33:17.848422 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:33:17.848787 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:33:17.858107 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:33:17.894346 augenrules[1924]: No rules Jan 17 12:33:17.895894 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:33:17.898239 sudo[1901]: pam_unix(sudo:session): session closed for user root Jan 17 12:33:18.048368 sshd[1897]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:18.055064 systemd[1]: sshd@7-10.230.31.94:22-139.178.68.195:33416.service: Deactivated successfully. Jan 17 12:33:18.056500 systemd-logind[1600]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:33:18.058359 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:33:18.059885 systemd-logind[1600]: Removed session 10. 
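The sudo session above removes the shipped rule files under /etc/audit/rules.d and restarts audit-rules.service, after which auditctl and augenrules both report "No rules", i.e. the kernel audit subsystem is left with an empty ruleset. For reference, such a rules.d file contains auditctl-style lines like the one below; this is illustrative syntax only, not the content of the files that were deleted:

    # /etc/audit/rules.d/10-example.rules -- illustrative
    -w /etc/ssh/sshd_config -p wa -k sshd_config   # watch writes/attribute changes, tag events "sshd_config"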
Jan 17 12:33:18.192685 systemd[1]: Started sshd@8-10.230.31.94:22-139.178.68.195:33432.service - OpenSSH per-connection server daemon (139.178.68.195:33432). Jan 17 12:33:19.086387 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 33432 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:33:19.088567 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:19.096104 systemd-logind[1600]: New session 11 of user core. Jan 17 12:33:19.103826 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:33:19.563416 sudo[1937]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:33:19.563875 sudo[1937]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:33:20.048651 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:33:20.051883 (dockerd)[1953]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:33:20.491251 dockerd[1953]: time="2025-01-17T12:33:20.490688142Z" level=info msg="Starting up" Jan 17 12:33:20.750953 systemd[1]: var-lib-docker-metacopy\x2dcheck3226265625-merged.mount: Deactivated successfully. Jan 17 12:33:20.775914 dockerd[1953]: time="2025-01-17T12:33:20.775857002Z" level=info msg="Loading containers: start." Jan 17 12:33:20.925349 kernel: Initializing XFRM netlink socket Jan 17 12:33:21.027166 systemd-networkd[1263]: docker0: Link UP Jan 17 12:33:21.047310 dockerd[1953]: time="2025-01-17T12:33:21.047090569Z" level=info msg="Loading containers: done." Jan 17 12:33:21.066979 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3861537465-merged.mount: Deactivated successfully. Jan 17 12:33:21.068186 dockerd[1953]: time="2025-01-17T12:33:21.068145970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:33:21.068672 dockerd[1953]: time="2025-01-17T12:33:21.068640833Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:33:21.069007 dockerd[1953]: time="2025-01-17T12:33:21.068979743Z" level=info msg="Daemon has completed initialization" Jan 17 12:33:21.137727 dockerd[1953]: time="2025-01-17T12:33:21.137634741Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:33:21.137944 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:33:22.225691 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 12:33:22.514861 containerd[1621]: time="2025-01-17T12:33:22.514630917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:33:23.215097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177932233.mount: Deactivated successfully. Jan 17 12:33:24.718933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:33:24.731252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:24.989086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:33:25.004252 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:33:25.110582 kubelet[2172]: E0117 12:33:25.109912 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:33:25.114597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:33:25.115008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:33:26.085243 containerd[1621]: time="2025-01-17T12:33:26.085083483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:26.088796 containerd[1621]: time="2025-01-17T12:33:26.088670849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140738" Jan 17 12:33:26.090129 containerd[1621]: time="2025-01-17T12:33:26.090047730Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:26.095469 containerd[1621]: time="2025-01-17T12:33:26.095395700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:26.097781 containerd[1621]: time="2025-01-17T12:33:26.097213079Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.582456741s" Jan 17 12:33:26.097781 containerd[1621]: time="2025-01-17T12:33:26.097323460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:33:26.132594 containerd[1621]: time="2025-01-17T12:33:26.132537126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:33:28.711315 containerd[1621]: time="2025-01-17T12:33:28.711210996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:28.712877 containerd[1621]: time="2025-01-17T12:33:28.712798840Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216649" Jan 17 12:33:28.713878 containerd[1621]: time="2025-01-17T12:33:28.713789843Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:28.719299 containerd[1621]: time="2025-01-17T12:33:28.718391927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:33:28.720474 containerd[1621]: time="2025-01-17T12:33:28.720431722Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.587838366s" Jan 17 12:33:28.720566 containerd[1621]: time="2025-01-17T12:33:28.720492205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:33:28.759154 containerd[1621]: time="2025-01-17T12:33:28.759107664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:33:30.370332 containerd[1621]: time="2025-01-17T12:33:30.370144764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:30.372148 containerd[1621]: time="2025-01-17T12:33:30.372087285Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332849" Jan 17 12:33:30.372973 containerd[1621]: time="2025-01-17T12:33:30.372882704Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:30.378550 containerd[1621]: time="2025-01-17T12:33:30.378466830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:30.380497 containerd[1621]: time="2025-01-17T12:33:30.380251068Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.620825697s" Jan 17 12:33:30.380497 containerd[1621]: time="2025-01-17T12:33:30.380337199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:33:30.419797 containerd[1621]: time="2025-01-17T12:33:30.419466643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:33:31.941924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040165890.mount: Deactivated successfully. 
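The kube-* image pulls logged above go through containerd's CRI plugin. Assuming crictl is available and pointed at the default containerd CRI socket (/run/containerd/containerd.sock), the same pulls can be reproduced or inspected by hand, e.g.:

    # pull the same scheduler release and confirm it is cached
    sudo crictl pull registry.k8s.io/kube-scheduler:v1.29.13
    sudo crictl images | grep kube-scheduler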
Jan 17 12:33:32.539147 containerd[1621]: time="2025-01-17T12:33:32.538513146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:32.540327 containerd[1621]: time="2025-01-17T12:33:32.539759516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620949" Jan 17 12:33:32.541006 containerd[1621]: time="2025-01-17T12:33:32.540603394Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:32.543506 containerd[1621]: time="2025-01-17T12:33:32.543433430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:32.544981 containerd[1621]: time="2025-01-17T12:33:32.544608814Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.125080551s" Jan 17 12:33:32.544981 containerd[1621]: time="2025-01-17T12:33:32.544689110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:33:32.576558 containerd[1621]: time="2025-01-17T12:33:32.576445721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:33:33.241631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575648613.mount: Deactivated successfully. 
Jan 17 12:33:34.391958 containerd[1621]: time="2025-01-17T12:33:34.391694737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.393804 containerd[1621]: time="2025-01-17T12:33:34.393303161Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 17 12:33:34.394689 containerd[1621]: time="2025-01-17T12:33:34.394627301Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.398643 containerd[1621]: time="2025-01-17T12:33:34.398608318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.400488 containerd[1621]: time="2025-01-17T12:33:34.400256345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.823752797s" Jan 17 12:33:34.400488 containerd[1621]: time="2025-01-17T12:33:34.400319420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:33:34.432687 containerd[1621]: time="2025-01-17T12:33:34.432636964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:33:34.978545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138153699.mount: Deactivated successfully. 
Jan 17 12:33:34.983066 containerd[1621]: time="2025-01-17T12:33:34.983008288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.984197 containerd[1621]: time="2025-01-17T12:33:34.984133530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 17 12:33:34.984513 containerd[1621]: time="2025-01-17T12:33:34.984455457Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.987535 containerd[1621]: time="2025-01-17T12:33:34.987459747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:34.988863 containerd[1621]: time="2025-01-17T12:33:34.988687613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 555.761895ms" Jan 17 12:33:34.988863 containerd[1621]: time="2025-01-17T12:33:34.988733652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:33:35.022322 containerd[1621]: time="2025-01-17T12:33:35.022258497Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:33:35.217945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:33:35.234709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:35.543625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:35.560095 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:33:35.649528 kubelet[2285]: E0117 12:33:35.649420 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:33:35.652784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:33:35.653217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:33:35.801789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269014066.mount: Deactivated successfully. Jan 17 12:33:37.134312 update_engine[1605]: I20250117 12:33:37.133558 1605 update_attempter.cc:509] Updating boot flags... 
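The kubelet keeps crash-looping here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init or join, which has not run at this point. Purely as an illustration (not what this node's bootstrap actually writes), a hand-created minimal KubeletConfiguration using the values the running kubelet later reports (cgroupfs driver, static pods from /etc/kubernetes/manifests) could look like:

    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    EOF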
Jan 17 12:33:37.237094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2345) Jan 17 12:33:37.340776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2345) Jan 17 12:33:37.427346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2345) Jan 17 12:33:38.637194 containerd[1621]: time="2025-01-17T12:33:38.637068157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:38.639096 containerd[1621]: time="2025-01-17T12:33:38.639043427Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 17 12:33:38.639780 containerd[1621]: time="2025-01-17T12:33:38.639347472Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:38.643800 containerd[1621]: time="2025-01-17T12:33:38.643728225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:33:38.646086 containerd[1621]: time="2025-01-17T12:33:38.645611735Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.623276579s" Jan 17 12:33:38.646086 containerd[1621]: time="2025-01-17T12:33:38.645709278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:33:43.679113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:43.691112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:43.722930 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-11.scope)... Jan 17 12:33:43.722970 systemd[1]: Reloading... Jan 17 12:33:43.941356 zram_generator::config[2461]: No configuration found. Jan 17 12:33:44.086454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:33:44.191111 systemd[1]: Reloading finished in 467 ms. Jan 17 12:33:44.270526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:44.275575 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:44.279616 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:33:44.280036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:44.290763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:44.454604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
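The docker.socket warning emitted during the reload points at line 6 of the unit; the fix the message itself suggests is simply to reference /run instead of the legacy /var/run symlink. Sketch of the corrected directive only:

    # /usr/lib/systemd/system/docker.socket, [Socket] section
    ListenStream=/run/docker.sock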
Jan 17 12:33:44.476232 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:33:44.570404 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:33:44.570404 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:33:44.570404 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:33:44.571392 kubelet[2539]: I0117 12:33:44.570533 2539 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:33:45.308147 kubelet[2539]: I0117 12:33:45.308073 2539 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:33:45.308147 kubelet[2539]: I0117 12:33:45.308138 2539 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:33:45.309336 kubelet[2539]: I0117 12:33:45.308674 2539 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:33:45.338329 kubelet[2539]: I0117 12:33:45.338265 2539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:33:45.340439 kubelet[2539]: E0117 12:33:45.340247 2539 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.31.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.358420 kubelet[2539]: I0117 12:33:45.358378 2539 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:33:45.361425 kubelet[2539]: I0117 12:33:45.361133 2539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:33:45.362426 kubelet[2539]: I0117 12:33:45.362362 2539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:33:45.362707 kubelet[2539]: I0117 12:33:45.362453 2539 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:33:45.362707 kubelet[2539]: I0117 12:33:45.362473 2539 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:33:45.362852 kubelet[2539]: I0117 12:33:45.362743 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:33:45.362993 kubelet[2539]: I0117 12:33:45.362972 2539 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:33:45.363071 kubelet[2539]: I0117 12:33:45.363011 2539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:33:45.363113 kubelet[2539]: I0117 12:33:45.363071 2539 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:33:45.363113 kubelet[2539]: I0117 12:33:45.363102 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:33:45.366520 kubelet[2539]: W0117 12:33:45.365969 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.31.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.366520 kubelet[2539]: E0117 12:33:45.366053 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.31.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.366520 kubelet[2539]: W0117 12:33:45.366441 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hkhka.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused 
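The repeated "connection refused" errors against https://10.230.31.94:6443 are expected at this stage: the kubelet is starting before the kube-apiserver static pod it is about to launch, so nothing listens on port 6443 yet. An illustrative way to check the apiserver's state once it should be up (not part of the boot flow):

    # is anything listening on the secure port yet?
    sudo ss -ltn 'sport = :6443'
    # unauthenticated health probe; should answer "ok" once the static pod is healthy
    curl -k https://10.230.31.94:6443/healthz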
Jan 17 12:33:45.366520 kubelet[2539]: E0117 12:33:45.366494 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hkhka.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.367705 kubelet[2539]: I0117 12:33:45.367253 2539 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:33:45.372092 kubelet[2539]: I0117 12:33:45.371921 2539 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:33:45.376537 kubelet[2539]: W0117 12:33:45.376513 2539 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:33:45.378203 kubelet[2539]: I0117 12:33:45.377975 2539 server.go:1256] "Started kubelet" Jan 17 12:33:45.378510 kubelet[2539]: I0117 12:33:45.378486 2539 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:33:45.380133 kubelet[2539]: I0117 12:33:45.379955 2539 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:33:45.385355 kubelet[2539]: I0117 12:33:45.384614 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:33:45.385355 kubelet[2539]: I0117 12:33:45.385085 2539 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:33:45.390185 kubelet[2539]: I0117 12:33:45.389662 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:33:45.390486 kubelet[2539]: E0117 12:33:45.390455 2539 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.31.94:6443/api/v1/namespaces/default/events\": dial tcp 10.230.31.94:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-hkhka.gb1.brightbox.com.181b7ae9afdd41e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hkhka.gb1.brightbox.com,UID:srv-hkhka.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hkhka.gb1.brightbox.com,},FirstTimestamp:2025-01-17 12:33:45.377939936 +0000 UTC m=+0.894231001,LastTimestamp:2025-01-17 12:33:45.377939936 +0000 UTC m=+0.894231001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hkhka.gb1.brightbox.com,}" Jan 17 12:33:45.398313 kubelet[2539]: E0117 12:33:45.395916 2539 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-hkhka.gb1.brightbox.com\" not found" Jan 17 12:33:45.398313 kubelet[2539]: I0117 12:33:45.396016 2539 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:33:45.398313 kubelet[2539]: I0117 12:33:45.396166 2539 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:33:45.398313 kubelet[2539]: I0117 12:33:45.396322 2539 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:33:45.398313 kubelet[2539]: W0117 12:33:45.396939 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.398313 kubelet[2539]: E0117 12:33:45.396999 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.402252 kubelet[2539]: E0117 12:33:45.402211 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hkhka.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="200ms" Jan 17 12:33:45.403763 kubelet[2539]: E0117 12:33:45.403738 2539 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:33:45.406246 kubelet[2539]: I0117 12:33:45.406215 2539 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:33:45.406539 kubelet[2539]: I0117 12:33:45.406511 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:33:45.409486 kubelet[2539]: I0117 12:33:45.409466 2539 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:33:45.428897 kubelet[2539]: I0117 12:33:45.428761 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:33:45.430382 kubelet[2539]: I0117 12:33:45.430350 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:33:45.430451 kubelet[2539]: I0117 12:33:45.430420 2539 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:33:45.430550 kubelet[2539]: I0117 12:33:45.430463 2539 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:33:45.430629 kubelet[2539]: E0117 12:33:45.430586 2539 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:33:45.450981 kubelet[2539]: W0117 12:33:45.450526 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.450981 kubelet[2539]: E0117 12:33:45.450595 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:45.472824 kubelet[2539]: I0117 12:33:45.472213 2539 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:33:45.472824 kubelet[2539]: I0117 12:33:45.472254 2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:33:45.472824 kubelet[2539]: I0117 12:33:45.472318 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:33:45.474738 kubelet[2539]: I0117 12:33:45.474714 2539 policy_none.go:49] "None policy: Start" Jan 17 12:33:45.475697 kubelet[2539]: I0117 12:33:45.475665 2539 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:33:45.475766 
kubelet[2539]: I0117 12:33:45.475721 2539 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:33:45.486498 kubelet[2539]: I0117 12:33:45.486441 2539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:33:45.487739 kubelet[2539]: I0117 12:33:45.487675 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:33:45.492535 kubelet[2539]: E0117 12:33:45.492494 2539 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-hkhka.gb1.brightbox.com\" not found" Jan 17 12:33:45.500576 kubelet[2539]: I0117 12:33:45.500445 2539 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.501179 kubelet[2539]: E0117 12:33:45.501155 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.531644 kubelet[2539]: I0117 12:33:45.531452 2539 topology_manager.go:215] "Topology Admit Handler" podUID="57d385351350e2901b8d36f5f77afaa6" podNamespace="kube-system" podName="kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.535112 kubelet[2539]: I0117 12:33:45.534903 2539 topology_manager.go:215] "Topology Admit Handler" podUID="024232ec97858f84d6c9aa0518d2ea05" podNamespace="kube-system" podName="kube-scheduler-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.537699 kubelet[2539]: I0117 12:33:45.537602 2539 topology_manager.go:215] "Topology Admit Handler" podUID="7a7724ac6a1a444fdd1022827fb711f1" podNamespace="kube-system" podName="kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.598323 kubelet[2539]: I0117 12:33:45.598031 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-kubeconfig\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.598323 kubelet[2539]: I0117 12:33:45.598164 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-ca-certs\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.598323 kubelet[2539]: I0117 12:33:45.598248 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-flexvolume-dir\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.598323 kubelet[2539]: I0117 12:33:45.598292 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-k8s-certs\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 
12:33:45.599800 kubelet[2539]: I0117 12:33:45.598340 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-k8s-certs\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.599800 kubelet[2539]: I0117 12:33:45.598382 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.599800 kubelet[2539]: I0117 12:33:45.598415 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.599800 kubelet[2539]: I0117 12:33:45.598446 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/024232ec97858f84d6c9aa0518d2ea05-kubeconfig\") pod \"kube-scheduler-srv-hkhka.gb1.brightbox.com\" (UID: \"024232ec97858f84d6c9aa0518d2ea05\") " pod="kube-system/kube-scheduler-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.599800 kubelet[2539]: I0117 12:33:45.598488 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-ca-certs\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.603241 kubelet[2539]: E0117 12:33:45.603207 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hkhka.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="400ms" Jan 17 12:33:45.706053 kubelet[2539]: I0117 12:33:45.705421 2539 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.706053 kubelet[2539]: E0117 12:33:45.706010 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:45.846762 containerd[1621]: time="2025-01-17T12:33:45.846642419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hkhka.gb1.brightbox.com,Uid:57d385351350e2901b8d36f5f77afaa6,Namespace:kube-system,Attempt:0,}" Jan 17 12:33:45.852323 containerd[1621]: time="2025-01-17T12:33:45.852185071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hkhka.gb1.brightbox.com,Uid:024232ec97858f84d6c9aa0518d2ea05,Namespace:kube-system,Attempt:0,}" Jan 17 12:33:45.855030 containerd[1621]: time="2025-01-17T12:33:45.854745715Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hkhka.gb1.brightbox.com,Uid:7a7724ac6a1a444fdd1022827fb711f1,Namespace:kube-system,Attempt:0,}" Jan 17 12:33:46.004528 kubelet[2539]: E0117 12:33:46.004363 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hkhka.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="800ms" Jan 17 12:33:46.111150 kubelet[2539]: I0117 12:33:46.110954 2539 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:46.111789 kubelet[2539]: E0117 12:33:46.111765 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:46.273700 kubelet[2539]: W0117 12:33:46.273575 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.273700 kubelet[2539]: E0117 12:33:46.273658 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.360064 kubelet[2539]: W0117 12:33:46.357501 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.31.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.360064 kubelet[2539]: E0117 12:33:46.360079 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.31.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.404922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759841281.mount: Deactivated successfully. 
Jan 17 12:33:46.410563 containerd[1621]: time="2025-01-17T12:33:46.410471441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:33:46.413194 containerd[1621]: time="2025-01-17T12:33:46.413098163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:33:46.414651 containerd[1621]: time="2025-01-17T12:33:46.414421788Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:33:46.416691 containerd[1621]: time="2025-01-17T12:33:46.416597394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 12:33:46.416691 containerd[1621]: time="2025-01-17T12:33:46.416695465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:33:46.418034 containerd[1621]: time="2025-01-17T12:33:46.417952547Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:33:46.418800 containerd[1621]: time="2025-01-17T12:33:46.418705067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:33:46.421540 containerd[1621]: time="2025-01-17T12:33:46.421445880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:33:46.426311 containerd[1621]: time="2025-01-17T12:33:46.425572956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.278463ms" Jan 17 12:33:46.428433 containerd[1621]: time="2025-01-17T12:33:46.428385332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.567241ms" Jan 17 12:33:46.433214 containerd[1621]: time="2025-01-17T12:33:46.433137693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.250535ms" Jan 17 12:33:46.637402 containerd[1621]: time="2025-01-17T12:33:46.637210443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:33:46.637772 containerd[1621]: time="2025-01-17T12:33:46.637352421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:33:46.637772 containerd[1621]: time="2025-01-17T12:33:46.637411260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.638045 containerd[1621]: time="2025-01-17T12:33:46.637685895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.644858 containerd[1621]: time="2025-01-17T12:33:46.644517239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:33:46.647124 kubelet[2539]: W0117 12:33:46.647012 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hkhka.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.647124 kubelet[2539]: E0117 12:33:46.647095 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hkhka.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.648376 containerd[1621]: time="2025-01-17T12:33:46.648051054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:33:46.648376 containerd[1621]: time="2025-01-17T12:33:46.648085628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.648376 containerd[1621]: time="2025-01-17T12:33:46.648227631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.651260 containerd[1621]: time="2025-01-17T12:33:46.650402357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:33:46.653316 containerd[1621]: time="2025-01-17T12:33:46.652758936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:33:46.653316 containerd[1621]: time="2025-01-17T12:33:46.652792034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.653316 containerd[1621]: time="2025-01-17T12:33:46.652915265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:33:46.781565 containerd[1621]: time="2025-01-17T12:33:46.781515035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hkhka.gb1.brightbox.com,Uid:024232ec97858f84d6c9aa0518d2ea05,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0dc5e46fb0c8a416042f6d8778d31d888ee98efd5b4ae057251597db9276133\"" Jan 17 12:33:46.791128 kubelet[2539]: W0117 12:33:46.790839 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.791370 kubelet[2539]: E0117 12:33:46.791281 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:46.793512 containerd[1621]: time="2025-01-17T12:33:46.793157967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hkhka.gb1.brightbox.com,Uid:7a7724ac6a1a444fdd1022827fb711f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"efcd4394ab9fdda9dc118d28ec77f3b9c744688d751869e330a8d71394141c06\"" Jan 17 12:33:46.800518 containerd[1621]: time="2025-01-17T12:33:46.800454133Z" level=info msg="CreateContainer within sandbox \"efcd4394ab9fdda9dc118d28ec77f3b9c744688d751869e330a8d71394141c06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:33:46.802047 containerd[1621]: time="2025-01-17T12:33:46.801904734Z" level=info msg="CreateContainer within sandbox \"f0dc5e46fb0c8a416042f6d8778d31d888ee98efd5b4ae057251597db9276133\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:33:46.805450 kubelet[2539]: E0117 12:33:46.805393 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hkhka.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="1.6s" Jan 17 12:33:46.812483 containerd[1621]: time="2025-01-17T12:33:46.812443250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hkhka.gb1.brightbox.com,Uid:57d385351350e2901b8d36f5f77afaa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2594b74aee618102eea3cb9b944df319513a1892b1e83df9409ed6748c6478d7\"" Jan 17 12:33:46.816529 containerd[1621]: time="2025-01-17T12:33:46.816487238Z" level=info msg="CreateContainer within sandbox \"2594b74aee618102eea3cb9b944df319513a1892b1e83df9409ed6748c6478d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:33:46.829334 containerd[1621]: time="2025-01-17T12:33:46.829254164Z" level=info msg="CreateContainer within sandbox \"f0dc5e46fb0c8a416042f6d8778d31d888ee98efd5b4ae057251597db9276133\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b4a5b984566e4c2605aac7146566d9f78ea7ef98d0b9d7704b458cbaeacc63d\"" Jan 17 12:33:46.830217 containerd[1621]: time="2025-01-17T12:33:46.830117686Z" level=info msg="CreateContainer within sandbox \"efcd4394ab9fdda9dc118d28ec77f3b9c744688d751869e330a8d71394141c06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"d24457086172feecdf3019a1a5cd53f6c9e3d9501f62c47cdf4e7218c1f3a3e0\"" Jan 17 12:33:46.830371 containerd[1621]: time="2025-01-17T12:33:46.830190388Z" level=info msg="StartContainer for \"9b4a5b984566e4c2605aac7146566d9f78ea7ef98d0b9d7704b458cbaeacc63d\"" Jan 17 12:33:46.830971 containerd[1621]: time="2025-01-17T12:33:46.830906746Z" level=info msg="StartContainer for \"d24457086172feecdf3019a1a5cd53f6c9e3d9501f62c47cdf4e7218c1f3a3e0\"" Jan 17 12:33:46.836876 containerd[1621]: time="2025-01-17T12:33:46.836833017Z" level=info msg="CreateContainer within sandbox \"2594b74aee618102eea3cb9b944df319513a1892b1e83df9409ed6748c6478d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"46aa193be9c65227138230385c0f3ee339a4a75ac59c7a042c86fff0c07399b0\"" Jan 17 12:33:46.837430 containerd[1621]: time="2025-01-17T12:33:46.837358093Z" level=info msg="StartContainer for \"46aa193be9c65227138230385c0f3ee339a4a75ac59c7a042c86fff0c07399b0\"" Jan 17 12:33:46.921994 kubelet[2539]: I0117 12:33:46.921692 2539 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:46.923626 kubelet[2539]: E0117 12:33:46.923354 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:46.973714 containerd[1621]: time="2025-01-17T12:33:46.973664404Z" level=info msg="StartContainer for \"d24457086172feecdf3019a1a5cd53f6c9e3d9501f62c47cdf4e7218c1f3a3e0\" returns successfully" Jan 17 12:33:47.013412 containerd[1621]: time="2025-01-17T12:33:47.012739179Z" level=info msg="StartContainer for \"46aa193be9c65227138230385c0f3ee339a4a75ac59c7a042c86fff0c07399b0\" returns successfully" Jan 17 12:33:47.015265 containerd[1621]: time="2025-01-17T12:33:47.015019934Z" level=info msg="StartContainer for \"9b4a5b984566e4c2605aac7146566d9f78ea7ef98d0b9d7704b458cbaeacc63d\" returns successfully" Jan 17 12:33:47.441342 kubelet[2539]: E0117 12:33:47.439609 2539 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.31.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.31.94:6443: connect: connection refused Jan 17 12:33:48.528262 kubelet[2539]: I0117 12:33:48.527975 2539 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:49.978042 kubelet[2539]: E0117 12:33:49.977957 2539 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-hkhka.gb1.brightbox.com\" not found" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:50.038342 kubelet[2539]: I0117 12:33:50.036497 2539 kubelet_node_status.go:76] "Successfully registered node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:50.084617 kubelet[2539]: E0117 12:33:50.084338 2539 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hkhka.gb1.brightbox.com.181b7ae9afdd41e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hkhka.gb1.brightbox.com,UID:srv-hkhka.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-hkhka.gb1.brightbox.com,},FirstTimestamp:2025-01-17 12:33:45.377939936 +0000 UTC m=+0.894231001,LastTimestamp:2025-01-17 12:33:45.377939936 +0000 UTC m=+0.894231001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hkhka.gb1.brightbox.com,}" Jan 17 12:33:50.369157 kubelet[2539]: I0117 12:33:50.367179 2539 apiserver.go:52] "Watching apiserver" Jan 17 12:33:50.397375 kubelet[2539]: I0117 12:33:50.397260 2539 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:33:52.667824 systemd[1]: Reloading requested from client PID 2816 ('systemctl') (unit session-11.scope)... Jan 17 12:33:52.668435 systemd[1]: Reloading... Jan 17 12:33:52.783666 zram_generator::config[2855]: No configuration found. Jan 17 12:33:53.002263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:33:53.120814 systemd[1]: Reloading finished in 451 ms. Jan 17 12:33:53.171945 kubelet[2539]: I0117 12:33:53.171847 2539 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:33:53.172272 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:53.180861 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:33:53.181502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:53.190804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:33:53.399516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:33:53.413183 (kubelet)[2929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:33:53.518868 kubelet[2929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:33:53.518868 kubelet[2929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:33:53.518868 kubelet[2929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:33:53.518868 kubelet[2929]: I0117 12:33:53.518450 2929 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:33:53.547726 kubelet[2929]: I0117 12:33:53.547000 2929 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:33:53.547726 kubelet[2929]: I0117 12:33:53.547032 2929 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:33:53.547726 kubelet[2929]: I0117 12:33:53.547520 2929 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:33:53.550069 kubelet[2929]: I0117 12:33:53.549940 2929 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
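After the reload, the replacement kubelet reports "Client rotation is on" and loads its credential from /var/lib/kubelet/pki/kubelet-client-current.pem, indicating the earlier certificate-signing-request bootstrap eventually succeeded. An illustrative check of the rotated certificate's subject and lifetime, assuming openssl is installed and, as is usual for this file, the certificate block precedes the private key:

    # show who the kubelet authenticates as and when the client cert expires
    sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates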
Jan 17 12:33:53.568165 kubelet[2929]: I0117 12:33:53.567756 2929 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:33:53.585136 kubelet[2929]: I0117 12:33:53.584356 2929 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:33:53.585136 kubelet[2929]: I0117 12:33:53.585101 2929 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:33:53.585520 kubelet[2929]: I0117 12:33:53.585391 2929 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:33:53.585520 kubelet[2929]: I0117 12:33:53.585462 2929 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:33:53.585520 kubelet[2929]: I0117 12:33:53.585493 2929 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:33:53.586615 kubelet[2929]: I0117 12:33:53.585588 2929 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:33:53.586615 kubelet[2929]: I0117 12:33:53.585795 2929 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:33:53.586964 kubelet[2929]: I0117 12:33:53.586836 2929 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:33:53.586964 kubelet[2929]: I0117 12:33:53.586907 2929 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:33:53.586964 kubelet[2929]: I0117 12:33:53.586934 2929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:33:53.593317 kubelet[2929]: I0117 12:33:53.590820 2929 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:33:53.593317 kubelet[2929]: I0117 12:33:53.591233 2929 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:33:53.598816 kubelet[2929]: I0117 12:33:53.598794 2929 server.go:1256] "Started kubelet" Jan 17 12:33:53.611122 kubelet[2929]: I0117 12:33:53.610631 2929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:33:53.628854 kubelet[2929]: I0117 12:33:53.624404 2929 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:33:53.628854 kubelet[2929]: I0117 12:33:53.625759 2929 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:33:53.631628 kubelet[2929]: I0117 12:33:53.631593 2929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:33:53.632276 kubelet[2929]: I0117 12:33:53.632256 2929 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:33:53.636068 kubelet[2929]: I0117 12:33:53.636027 2929 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:33:53.646571 kubelet[2929]: I0117 12:33:53.646539 2929 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:33:53.647095 kubelet[2929]: I0117 12:33:53.647066 2929 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:33:53.655489 kubelet[2929]: I0117 12:33:53.653031 2929 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:33:53.655489 kubelet[2929]: I0117 12:33:53.653162 2929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:33:53.655489 kubelet[2929]: I0117 12:33:53.655101 2929 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:33:53.666433 kubelet[2929]: I0117 12:33:53.665423 2929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:33:53.671084 kubelet[2929]: E0117 12:33:53.665835 2929 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:33:53.673264 kubelet[2929]: I0117 12:33:53.672910 2929 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:33:53.675080 kubelet[2929]: I0117 12:33:53.675056 2929 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:33:53.679936 kubelet[2929]: I0117 12:33:53.679910 2929 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:33:53.681052 kubelet[2929]: E0117 12:33:53.681029 2929 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:33:53.762312 kubelet[2929]: I0117 12:33:53.759588 2929 kubelet_node_status.go:73] "Attempting to register node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.773892 kubelet[2929]: I0117 12:33:53.773865 2929 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:33:53.773892 kubelet[2929]: I0117 12:33:53.773894 2929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:33:53.774075 kubelet[2929]: I0117 12:33:53.773929 2929 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:33:53.774186 kubelet[2929]: I0117 12:33:53.774163 2929 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:33:53.774286 kubelet[2929]: I0117 12:33:53.774222 2929 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:33:53.774286 kubelet[2929]: I0117 12:33:53.774245 2929 policy_none.go:49] "None policy: Start" Jan 17 12:33:53.775428 kubelet[2929]: I0117 12:33:53.775405 2929 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:33:53.775533 kubelet[2929]: I0117 12:33:53.775457 2929 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:33:53.775712 kubelet[2929]: I0117 12:33:53.775665 2929 state_mem.go:75] "Updated machine memory state" Jan 17 12:33:53.781618 kubelet[2929]: I0117 12:33:53.780323 2929 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:33:53.781618 kubelet[2929]: I0117 12:33:53.780802 2929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:33:53.782830 kubelet[2929]: I0117 12:33:53.782597 2929 topology_manager.go:215] "Topology Admit Handler" podUID="7a7724ac6a1a444fdd1022827fb711f1" podNamespace="kube-system" podName="kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.782830 kubelet[2929]: I0117 12:33:53.782766 2929 topology_manager.go:215] "Topology Admit Handler" podUID="57d385351350e2901b8d36f5f77afaa6" podNamespace="kube-system" podName="kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.784102 kubelet[2929]: I0117 12:33:53.783473 2929 kubelet_node_status.go:112] "Node was previously registered" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.784102 kubelet[2929]: I0117 12:33:53.783565 2929 kubelet_node_status.go:76] "Successfully registered node" node="srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.787054 kubelet[2929]: I0117 12:33:53.785704 2929 topology_manager.go:215] "Topology Admit Handler" podUID="024232ec97858f84d6c9aa0518d2ea05" podNamespace="kube-system" podName="kube-scheduler-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.806734 kubelet[2929]: W0117 12:33:53.804161 2929 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:33:53.809147 kubelet[2929]: W0117 12:33:53.808745 2929 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:33:53.810590 
kubelet[2929]: W0117 12:33:53.808793 2929 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:33:53.849593 kubelet[2929]: I0117 12:33:53.849526 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-ca-certs\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.849974 kubelet[2929]: I0117 12:33:53.849944 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.850149 kubelet[2929]: I0117 12:33:53.850125 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-k8s-certs\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.850325 kubelet[2929]: I0117 12:33:53.850291 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.850481 kubelet[2929]: I0117 12:33:53.850462 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/024232ec97858f84d6c9aa0518d2ea05-kubeconfig\") pod \"kube-scheduler-srv-hkhka.gb1.brightbox.com\" (UID: \"024232ec97858f84d6c9aa0518d2ea05\") " pod="kube-system/kube-scheduler-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.850622 kubelet[2929]: I0117 12:33:53.850604 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a7724ac6a1a444fdd1022827fb711f1-k8s-certs\") pod \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" (UID: \"7a7724ac6a1a444fdd1022827fb711f1\") " pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.850825 kubelet[2929]: I0117 12:33:53.850806 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-ca-certs\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.851132 kubelet[2929]: I0117 12:33:53.850980 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-flexvolume-dir\") pod 
\"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:53.851132 kubelet[2929]: I0117 12:33:53.851077 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57d385351350e2901b8d36f5f77afaa6-kubeconfig\") pod \"kube-controller-manager-srv-hkhka.gb1.brightbox.com\" (UID: \"57d385351350e2901b8d36f5f77afaa6\") " pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:54.606182 kubelet[2929]: I0117 12:33:54.605930 2929 apiserver.go:52] "Watching apiserver" Jan 17 12:33:54.648514 kubelet[2929]: I0117 12:33:54.648413 2929 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:33:54.735223 kubelet[2929]: W0117 12:33:54.733495 2929 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:33:54.735223 kubelet[2929]: E0117 12:33:54.733610 2929 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-hkhka.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" Jan 17 12:33:54.843262 kubelet[2929]: I0117 12:33:54.842970 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-hkhka.gb1.brightbox.com" podStartSLOduration=1.842873439 podStartE2EDuration="1.842873439s" podCreationTimestamp="2025-01-17 12:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:33:54.84255586 +0000 UTC m=+1.420406524" watchObservedRunningTime="2025-01-17 12:33:54.842873439 +0000 UTC m=+1.420724102" Jan 17 12:33:54.879183 kubelet[2929]: I0117 12:33:54.878652 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-hkhka.gb1.brightbox.com" podStartSLOduration=1.878600097 podStartE2EDuration="1.878600097s" podCreationTimestamp="2025-01-17 12:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:33:54.863794062 +0000 UTC m=+1.441644742" watchObservedRunningTime="2025-01-17 12:33:54.878600097 +0000 UTC m=+1.456450764" Jan 17 12:33:54.912055 kubelet[2929]: I0117 12:33:54.911887 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-hkhka.gb1.brightbox.com" podStartSLOduration=1.91182855 podStartE2EDuration="1.91182855s" podCreationTimestamp="2025-01-17 12:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:33:54.878858259 +0000 UTC m=+1.456708914" watchObservedRunningTime="2025-01-17 12:33:54.91182855 +0000 UTC m=+1.489679214" Jan 17 12:33:59.446063 sudo[1937]: pam_unix(sudo:session): session closed for user root Jan 17 12:33:59.592463 sshd[1933]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:59.601030 systemd[1]: sshd@8-10.230.31.94:22-139.178.68.195:33432.service: Deactivated successfully. Jan 17 12:33:59.606400 systemd-logind[1600]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:33:59.606885 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 17 12:33:59.609488 systemd-logind[1600]: Removed session 11. Jan 17 12:34:05.204146 kubelet[2929]: I0117 12:34:05.203862 2929 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:34:05.206414 kubelet[2929]: I0117 12:34:05.205527 2929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:34:05.206526 containerd[1621]: time="2025-01-17T12:34:05.205125049Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:34:06.001694 kubelet[2929]: I0117 12:34:06.000593 2929 topology_manager.go:215] "Topology Admit Handler" podUID="958d2a60-2f26-4e6e-bec6-488b7492834b" podNamespace="kube-system" podName="kube-proxy-dvx9r" Jan 17 12:34:06.130187 kubelet[2929]: I0117 12:34:06.129905 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/958d2a60-2f26-4e6e-bec6-488b7492834b-kube-proxy\") pod \"kube-proxy-dvx9r\" (UID: \"958d2a60-2f26-4e6e-bec6-488b7492834b\") " pod="kube-system/kube-proxy-dvx9r" Jan 17 12:34:06.130187 kubelet[2929]: I0117 12:34:06.129985 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/958d2a60-2f26-4e6e-bec6-488b7492834b-xtables-lock\") pod \"kube-proxy-dvx9r\" (UID: \"958d2a60-2f26-4e6e-bec6-488b7492834b\") " pod="kube-system/kube-proxy-dvx9r" Jan 17 12:34:06.130187 kubelet[2929]: I0117 12:34:06.130025 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglrn\" (UniqueName: \"kubernetes.io/projected/958d2a60-2f26-4e6e-bec6-488b7492834b-kube-api-access-mglrn\") pod \"kube-proxy-dvx9r\" (UID: \"958d2a60-2f26-4e6e-bec6-488b7492834b\") " pod="kube-system/kube-proxy-dvx9r" Jan 17 12:34:06.130187 kubelet[2929]: I0117 12:34:06.130057 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/958d2a60-2f26-4e6e-bec6-488b7492834b-lib-modules\") pod \"kube-proxy-dvx9r\" (UID: \"958d2a60-2f26-4e6e-bec6-488b7492834b\") " pod="kube-system/kube-proxy-dvx9r" Jan 17 12:34:06.315496 containerd[1621]: time="2025-01-17T12:34:06.312494646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvx9r,Uid:958d2a60-2f26-4e6e-bec6-488b7492834b,Namespace:kube-system,Attempt:0,}" Jan 17 12:34:06.325862 kubelet[2929]: I0117 12:34:06.323227 2929 topology_manager.go:215] "Topology Admit Handler" podUID="b0c5a017-2799-43f7-8acd-a406ca1067da" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-865kz" Jan 17 12:34:06.379753 containerd[1621]: time="2025-01-17T12:34:06.378995774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:06.379753 containerd[1621]: time="2025-01-17T12:34:06.379664047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:06.379753 containerd[1621]: time="2025-01-17T12:34:06.379707991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:06.380718 containerd[1621]: time="2025-01-17T12:34:06.380339928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:06.413766 systemd[1]: run-containerd-runc-k8s.io-81761579880ac2930ac8d15dbb775a4abfd5695589abab4ca77efebc9e3ec2f9-runc.Gq6aGQ.mount: Deactivated successfully. Jan 17 12:34:06.432675 kubelet[2929]: I0117 12:34:06.432498 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0c5a017-2799-43f7-8acd-a406ca1067da-var-lib-calico\") pod \"tigera-operator-c7ccbd65-865kz\" (UID: \"b0c5a017-2799-43f7-8acd-a406ca1067da\") " pod="tigera-operator/tigera-operator-c7ccbd65-865kz" Jan 17 12:34:06.432675 kubelet[2929]: I0117 12:34:06.432579 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4k5b\" (UniqueName: \"kubernetes.io/projected/b0c5a017-2799-43f7-8acd-a406ca1067da-kube-api-access-r4k5b\") pod \"tigera-operator-c7ccbd65-865kz\" (UID: \"b0c5a017-2799-43f7-8acd-a406ca1067da\") " pod="tigera-operator/tigera-operator-c7ccbd65-865kz" Jan 17 12:34:06.444984 containerd[1621]: time="2025-01-17T12:34:06.444546646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvx9r,Uid:958d2a60-2f26-4e6e-bec6-488b7492834b,Namespace:kube-system,Attempt:0,} returns sandbox id \"81761579880ac2930ac8d15dbb775a4abfd5695589abab4ca77efebc9e3ec2f9\"" Jan 17 12:34:06.450651 containerd[1621]: time="2025-01-17T12:34:06.450614941Z" level=info msg="CreateContainer within sandbox \"81761579880ac2930ac8d15dbb775a4abfd5695589abab4ca77efebc9e3ec2f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:34:06.466366 containerd[1621]: time="2025-01-17T12:34:06.466322780Z" level=info msg="CreateContainer within sandbox \"81761579880ac2930ac8d15dbb775a4abfd5695589abab4ca77efebc9e3ec2f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99313128d1c4ebd2ab9f703c6cb5b4a2433c65cfe036f081248615f380e94048\"" Jan 17 12:34:06.467769 containerd[1621]: time="2025-01-17T12:34:06.467733623Z" level=info msg="StartContainer for \"99313128d1c4ebd2ab9f703c6cb5b4a2433c65cfe036f081248615f380e94048\"" Jan 17 12:34:06.577102 containerd[1621]: time="2025-01-17T12:34:06.576683564Z" level=info msg="StartContainer for \"99313128d1c4ebd2ab9f703c6cb5b4a2433c65cfe036f081248615f380e94048\" returns successfully" Jan 17 12:34:06.635793 containerd[1621]: time="2025-01-17T12:34:06.635740311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-865kz,Uid:b0c5a017-2799-43f7-8acd-a406ca1067da,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:34:06.676753 containerd[1621]: time="2025-01-17T12:34:06.676072841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:06.676753 containerd[1621]: time="2025-01-17T12:34:06.676160824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:06.676753 containerd[1621]: time="2025-01-17T12:34:06.676235240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:06.678380 containerd[1621]: time="2025-01-17T12:34:06.676643247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:06.768394 kubelet[2929]: I0117 12:34:06.766974 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dvx9r" podStartSLOduration=1.766820995 podStartE2EDuration="1.766820995s" podCreationTimestamp="2025-01-17 12:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:34:06.764717011 +0000 UTC m=+13.342567681" watchObservedRunningTime="2025-01-17 12:34:06.766820995 +0000 UTC m=+13.344671669" Jan 17 12:34:06.786190 containerd[1621]: time="2025-01-17T12:34:06.786110998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-865kz,Uid:b0c5a017-2799-43f7-8acd-a406ca1067da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b1e5fd3b9a70e6b41c4f86688e38307cad8ff6925012f6234bb51c482cc9fbb\"" Jan 17 12:34:06.802187 containerd[1621]: time="2025-01-17T12:34:06.801401245Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:34:11.340929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217369937.mount: Deactivated successfully. Jan 17 12:34:12.233127 containerd[1621]: time="2025-01-17T12:34:12.233035266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:12.234268 containerd[1621]: time="2025-01-17T12:34:12.234229487Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Jan 17 12:34:12.235298 containerd[1621]: time="2025-01-17T12:34:12.235225288Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:12.238047 containerd[1621]: time="2025-01-17T12:34:12.237988759Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:12.239496 containerd[1621]: time="2025-01-17T12:34:12.239246577Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.437782879s" Jan 17 12:34:12.239496 containerd[1621]: time="2025-01-17T12:34:12.239306424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:34:12.259030 containerd[1621]: time="2025-01-17T12:34:12.258969104Z" level=info msg="CreateContainer within sandbox \"2b1e5fd3b9a70e6b41c4f86688e38307cad8ff6925012f6234bb51c482cc9fbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:34:12.288123 containerd[1621]: time="2025-01-17T12:34:12.287929386Z" level=info msg="CreateContainer within sandbox \"2b1e5fd3b9a70e6b41c4f86688e38307cad8ff6925012f6234bb51c482cc9fbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"180d85ed462854230857591009bdeafd2839e0e8cf8932592d41cb550785eb84\"" Jan 17 12:34:12.291317 containerd[1621]: time="2025-01-17T12:34:12.290708068Z" level=info msg="StartContainer for 
\"180d85ed462854230857591009bdeafd2839e0e8cf8932592d41cb550785eb84\"" Jan 17 12:34:12.393550 containerd[1621]: time="2025-01-17T12:34:12.393500284Z" level=info msg="StartContainer for \"180d85ed462854230857591009bdeafd2839e0e8cf8932592d41cb550785eb84\" returns successfully" Jan 17 12:34:12.785477 kubelet[2929]: I0117 12:34:12.785419 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-865kz" podStartSLOduration=1.332477366 podStartE2EDuration="6.785365069s" podCreationTimestamp="2025-01-17 12:34:06 +0000 UTC" firstStartedPulling="2025-01-17 12:34:06.789233179 +0000 UTC m=+13.367083836" lastFinishedPulling="2025-01-17 12:34:12.242120881 +0000 UTC m=+18.819971539" observedRunningTime="2025-01-17 12:34:12.780487511 +0000 UTC m=+19.358338183" watchObservedRunningTime="2025-01-17 12:34:12.785365069 +0000 UTC m=+19.363215736" Jan 17 12:34:15.961798 kubelet[2929]: I0117 12:34:15.961639 2929 topology_manager.go:215] "Topology Admit Handler" podUID="fde02600-c6e4-4397-916b-c99294bad395" podNamespace="calico-system" podName="calico-typha-8c94648d8-zzp4b" Jan 17 12:34:15.985207 kubelet[2929]: W0117 12:34:15.985047 2929 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:15.985858 kubelet[2929]: E0117 12:34:15.985833 2929 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:15.995680 kubelet[2929]: W0117 12:34:15.995431 2929 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:15.995680 kubelet[2929]: E0117 12:34:15.995539 2929 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:15.996123 kubelet[2929]: W0117 12:34:15.995961 2929 reflector.go:539] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:15.996123 kubelet[2929]: E0117 12:34:15.995991 2929 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-hkhka.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace 
"calico-system": no relationship found between node 'srv-hkhka.gb1.brightbox.com' and this object Jan 17 12:34:16.105838 kubelet[2929]: I0117 12:34:16.105769 2929 topology_manager.go:215] "Topology Admit Handler" podUID="82784ad9-6b01-4939-a1f9-19a61cc0aa71" podNamespace="calico-system" podName="calico-node-qpfg4" Jan 17 12:34:16.116922 kubelet[2929]: I0117 12:34:16.116857 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-var-run-calico\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.117139 kubelet[2929]: I0117 12:34:16.116943 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-policysync\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.117139 kubelet[2929]: I0117 12:34:16.117003 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-var-lib-calico\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120046 kubelet[2929]: I0117 12:34:16.117043 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/82784ad9-6b01-4939-a1f9-19a61cc0aa71-node-certs\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120046 kubelet[2929]: I0117 12:34:16.117350 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fde02600-c6e4-4397-916b-c99294bad395-tigera-ca-bundle\") pod \"calico-typha-8c94648d8-zzp4b\" (UID: \"fde02600-c6e4-4397-916b-c99294bad395\") " pod="calico-system/calico-typha-8c94648d8-zzp4b" Jan 17 12:34:16.120046 kubelet[2929]: I0117 12:34:16.117444 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-xtables-lock\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120046 kubelet[2929]: I0117 12:34:16.117479 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-lib-modules\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120046 kubelet[2929]: I0117 12:34:16.117832 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-cni-bin-dir\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120553 kubelet[2929]: I0117 12:34:16.117997 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-cni-log-dir\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120553 kubelet[2929]: I0117 12:34:16.118152 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82784ad9-6b01-4939-a1f9-19a61cc0aa71-tigera-ca-bundle\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120553 kubelet[2929]: I0117 12:34:16.118197 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-cni-net-dir\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.120553 kubelet[2929]: I0117 12:34:16.118429 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fde02600-c6e4-4397-916b-c99294bad395-typha-certs\") pod \"calico-typha-8c94648d8-zzp4b\" (UID: \"fde02600-c6e4-4397-916b-c99294bad395\") " pod="calico-system/calico-typha-8c94648d8-zzp4b" Jan 17 12:34:16.120553 kubelet[2929]: I0117 12:34:16.118721 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzwx\" (UniqueName: \"kubernetes.io/projected/fde02600-c6e4-4397-916b-c99294bad395-kube-api-access-7lzwx\") pod \"calico-typha-8c94648d8-zzp4b\" (UID: \"fde02600-c6e4-4397-916b-c99294bad395\") " pod="calico-system/calico-typha-8c94648d8-zzp4b" Jan 17 12:34:16.220935 kubelet[2929]: I0117 12:34:16.219601 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwl25\" (UniqueName: \"kubernetes.io/projected/82784ad9-6b01-4939-a1f9-19a61cc0aa71-kube-api-access-jwl25\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.220935 kubelet[2929]: I0117 12:34:16.219800 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/82784ad9-6b01-4939-a1f9-19a61cc0aa71-flexvol-driver-host\") pod \"calico-node-qpfg4\" (UID: \"82784ad9-6b01-4939-a1f9-19a61cc0aa71\") " pod="calico-system/calico-node-qpfg4" Jan 17 12:34:16.239418 kubelet[2929]: I0117 12:34:16.239365 2929 topology_manager.go:215] "Topology Admit Handler" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" podNamespace="calico-system" podName="csi-node-driver-xk8nd" Jan 17 12:34:16.251393 kubelet[2929]: E0117 12:34:16.251346 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:16.320684 kubelet[2929]: I0117 12:34:16.320620 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d743c195-9b4b-4bf7-a2e2-343eb8b2964b-varrun\") pod \"csi-node-driver-xk8nd\" (UID: 
\"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\") " pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:16.321058 kubelet[2929]: I0117 12:34:16.320849 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d743c195-9b4b-4bf7-a2e2-343eb8b2964b-socket-dir\") pod \"csi-node-driver-xk8nd\" (UID: \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\") " pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:16.321058 kubelet[2929]: I0117 12:34:16.320904 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d743c195-9b4b-4bf7-a2e2-343eb8b2964b-kubelet-dir\") pod \"csi-node-driver-xk8nd\" (UID: \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\") " pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:16.321058 kubelet[2929]: I0117 12:34:16.320961 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpcxk\" (UniqueName: \"kubernetes.io/projected/d743c195-9b4b-4bf7-a2e2-343eb8b2964b-kube-api-access-hpcxk\") pod \"csi-node-driver-xk8nd\" (UID: \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\") " pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:16.321058 kubelet[2929]: I0117 12:34:16.321033 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d743c195-9b4b-4bf7-a2e2-343eb8b2964b-registration-dir\") pod \"csi-node-driver-xk8nd\" (UID: \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\") " pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:16.422320 kubelet[2929]: E0117 12:34:16.422213 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.422320 kubelet[2929]: W0117 12:34:16.422267 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.423947 kubelet[2929]: E0117 12:34:16.423888 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.424309 kubelet[2929]: E0117 12:34:16.424258 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.424309 kubelet[2929]: W0117 12:34:16.424306 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.424458 kubelet[2929]: E0117 12:34:16.424331 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.424667 kubelet[2929]: E0117 12:34:16.424647 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.424667 kubelet[2929]: W0117 12:34:16.424667 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.424791 kubelet[2929]: E0117 12:34:16.424686 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.424978 kubelet[2929]: E0117 12:34:16.424960 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.424978 kubelet[2929]: W0117 12:34:16.424979 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.425140 kubelet[2929]: E0117 12:34:16.425010 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.425337 kubelet[2929]: E0117 12:34:16.425318 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.425337 kubelet[2929]: W0117 12:34:16.425337 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.425638 kubelet[2929]: E0117 12:34:16.425356 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.425770 kubelet[2929]: E0117 12:34:16.425643 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.425770 kubelet[2929]: W0117 12:34:16.425657 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.425770 kubelet[2929]: E0117 12:34:16.425675 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.426163 kubelet[2929]: E0117 12:34:16.425944 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.426163 kubelet[2929]: W0117 12:34:16.425957 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.426163 kubelet[2929]: E0117 12:34:16.425984 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.426756 kubelet[2929]: E0117 12:34:16.426677 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.426756 kubelet[2929]: W0117 12:34:16.426700 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.426964 kubelet[2929]: E0117 12:34:16.426844 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.427146 kubelet[2929]: E0117 12:34:16.427114 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.427146 kubelet[2929]: W0117 12:34:16.427135 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.427421 kubelet[2929]: E0117 12:34:16.427162 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.427495 kubelet[2929]: E0117 12:34:16.427437 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.427495 kubelet[2929]: W0117 12:34:16.427451 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.427495 kubelet[2929]: E0117 12:34:16.427468 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.427730 kubelet[2929]: E0117 12:34:16.427704 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.427730 kubelet[2929]: W0117 12:34:16.427720 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.427973 kubelet[2929]: E0117 12:34:16.427865 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.428040 kubelet[2929]: E0117 12:34:16.427985 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.428040 kubelet[2929]: W0117 12:34:16.427998 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.428447 kubelet[2929]: E0117 12:34:16.428425 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.428578 kubelet[2929]: E0117 12:34:16.428559 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.428670 kubelet[2929]: W0117 12:34:16.428578 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.428810 kubelet[2929]: E0117 12:34:16.428762 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.429041 kubelet[2929]: E0117 12:34:16.429022 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.429041 kubelet[2929]: W0117 12:34:16.429041 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.429420 kubelet[2929]: E0117 12:34:16.429362 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.429684 kubelet[2929]: E0117 12:34:16.429458 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.429684 kubelet[2929]: W0117 12:34:16.429470 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.429684 kubelet[2929]: E0117 12:34:16.429497 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.429966 kubelet[2929]: E0117 12:34:16.429935 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.430246 kubelet[2929]: W0117 12:34:16.430043 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.430246 kubelet[2929]: E0117 12:34:16.430093 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.430517 kubelet[2929]: E0117 12:34:16.430497 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.430626 kubelet[2929]: W0117 12:34:16.430606 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.431375 kubelet[2929]: E0117 12:34:16.431194 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.431539 kubelet[2929]: E0117 12:34:16.431520 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.431759 kubelet[2929]: W0117 12:34:16.431627 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.431759 kubelet[2929]: E0117 12:34:16.431685 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.432186 kubelet[2929]: E0117 12:34:16.432050 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.432186 kubelet[2929]: W0117 12:34:16.432068 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.432186 kubelet[2929]: E0117 12:34:16.432162 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.432668 kubelet[2929]: E0117 12:34:16.432517 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.432668 kubelet[2929]: W0117 12:34:16.432535 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.432668 kubelet[2929]: E0117 12:34:16.432579 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.433028 kubelet[2929]: E0117 12:34:16.432897 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.433028 kubelet[2929]: W0117 12:34:16.432914 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.433028 kubelet[2929]: E0117 12:34:16.432949 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.433532 kubelet[2929]: E0117 12:34:16.433398 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.433532 kubelet[2929]: W0117 12:34:16.433416 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.433532 kubelet[2929]: E0117 12:34:16.433476 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.434015 kubelet[2929]: E0117 12:34:16.433864 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.434015 kubelet[2929]: W0117 12:34:16.433881 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.434265 kubelet[2929]: E0117 12:34:16.434153 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.434555 kubelet[2929]: E0117 12:34:16.434421 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.434555 kubelet[2929]: W0117 12:34:16.434439 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.434684 kubelet[2929]: E0117 12:34:16.434561 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.435017 kubelet[2929]: E0117 12:34:16.434899 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.435017 kubelet[2929]: W0117 12:34:16.434918 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.435017 kubelet[2929]: E0117 12:34:16.435012 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.435589 kubelet[2929]: E0117 12:34:16.435408 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.435589 kubelet[2929]: W0117 12:34:16.435426 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.435589 kubelet[2929]: E0117 12:34:16.435462 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.436049 kubelet[2929]: E0117 12:34:16.435927 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.436049 kubelet[2929]: W0117 12:34:16.435946 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.436049 kubelet[2929]: E0117 12:34:16.435983 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.436560 kubelet[2929]: E0117 12:34:16.436375 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.436560 kubelet[2929]: W0117 12:34:16.436392 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.436560 kubelet[2929]: E0117 12:34:16.436419 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.436987 kubelet[2929]: E0117 12:34:16.436818 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.436987 kubelet[2929]: W0117 12:34:16.436837 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.436987 kubelet[2929]: E0117 12:34:16.436886 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.437261 kubelet[2929]: E0117 12:34:16.437242 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.437396 kubelet[2929]: W0117 12:34:16.437375 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.437528 kubelet[2929]: E0117 12:34:16.437508 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.532246 kubelet[2929]: E0117 12:34:16.531984 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.532246 kubelet[2929]: W0117 12:34:16.532027 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.532246 kubelet[2929]: E0117 12:34:16.532067 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.533354 kubelet[2929]: E0117 12:34:16.533319 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.533354 kubelet[2929]: W0117 12:34:16.533340 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.533509 kubelet[2929]: E0117 12:34:16.533359 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.533691 kubelet[2929]: E0117 12:34:16.533666 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.533691 kubelet[2929]: W0117 12:34:16.533687 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.533823 kubelet[2929]: E0117 12:34:16.533705 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.534008 kubelet[2929]: E0117 12:34:16.533984 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.534008 kubelet[2929]: W0117 12:34:16.534003 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.534141 kubelet[2929]: E0117 12:34:16.534021 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.534841 kubelet[2929]: E0117 12:34:16.534811 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.534841 kubelet[2929]: W0117 12:34:16.534837 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.535001 kubelet[2929]: E0117 12:34:16.534854 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.535520 kubelet[2929]: E0117 12:34:16.535484 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.535520 kubelet[2929]: W0117 12:34:16.535515 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.535670 kubelet[2929]: E0117 12:34:16.535534 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.636747 kubelet[2929]: E0117 12:34:16.636683 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.636936 kubelet[2929]: W0117 12:34:16.636833 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.636936 kubelet[2929]: E0117 12:34:16.636866 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.637360 kubelet[2929]: E0117 12:34:16.637339 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.637360 kubelet[2929]: W0117 12:34:16.637359 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.637521 kubelet[2929]: E0117 12:34:16.637409 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.637800 kubelet[2929]: E0117 12:34:16.637780 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.637800 kubelet[2929]: W0117 12:34:16.637800 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.637930 kubelet[2929]: E0117 12:34:16.637829 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.638187 kubelet[2929]: E0117 12:34:16.638169 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.638262 kubelet[2929]: W0117 12:34:16.638187 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.638262 kubelet[2929]: E0117 12:34:16.638234 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.638596 kubelet[2929]: E0117 12:34:16.638576 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.638596 kubelet[2929]: W0117 12:34:16.638595 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.638700 kubelet[2929]: E0117 12:34:16.638637 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.638955 kubelet[2929]: E0117 12:34:16.638936 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.639042 kubelet[2929]: W0117 12:34:16.638964 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.639042 kubelet[2929]: E0117 12:34:16.638983 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.740181 kubelet[2929]: E0117 12:34:16.740144 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.740636 kubelet[2929]: W0117 12:34:16.740433 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.740636 kubelet[2929]: E0117 12:34:16.740474 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.740902 kubelet[2929]: E0117 12:34:16.740885 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.741162 kubelet[2929]: W0117 12:34:16.741008 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.741162 kubelet[2929]: E0117 12:34:16.741049 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.741821 kubelet[2929]: E0117 12:34:16.741658 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.741821 kubelet[2929]: W0117 12:34:16.741676 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.741821 kubelet[2929]: E0117 12:34:16.741695 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.742316 kubelet[2929]: E0117 12:34:16.742137 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.742316 kubelet[2929]: W0117 12:34:16.742155 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.742316 kubelet[2929]: E0117 12:34:16.742184 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.742885 kubelet[2929]: E0117 12:34:16.742713 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.742885 kubelet[2929]: W0117 12:34:16.742731 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.742885 kubelet[2929]: E0117 12:34:16.742749 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.743252 kubelet[2929]: E0117 12:34:16.743172 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.743252 kubelet[2929]: W0117 12:34:16.743190 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.743252 kubelet[2929]: E0117 12:34:16.743208 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.844269 kubelet[2929]: E0117 12:34:16.843912 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.844269 kubelet[2929]: W0117 12:34:16.843948 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.844269 kubelet[2929]: E0117 12:34:16.843973 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.844800 kubelet[2929]: E0117 12:34:16.844780 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.845026 kubelet[2929]: W0117 12:34:16.845002 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.845493 kubelet[2929]: E0117 12:34:16.845191 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.845897 kubelet[2929]: E0117 12:34:16.845878 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.846170 kubelet[2929]: W0117 12:34:16.845995 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.846170 kubelet[2929]: E0117 12:34:16.846021 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.846661 kubelet[2929]: E0117 12:34:16.846505 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.846661 kubelet[2929]: W0117 12:34:16.846523 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.846661 kubelet[2929]: E0117 12:34:16.846541 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.847156 kubelet[2929]: E0117 12:34:16.846998 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.847156 kubelet[2929]: W0117 12:34:16.847015 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.847156 kubelet[2929]: E0117 12:34:16.847033 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.847665 kubelet[2929]: E0117 12:34:16.847563 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.847665 kubelet[2929]: W0117 12:34:16.847587 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.847665 kubelet[2929]: E0117 12:34:16.847604 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.902693 kubelet[2929]: E0117 12:34:16.902561 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.902693 kubelet[2929]: W0117 12:34:16.902599 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.902693 kubelet[2929]: E0117 12:34:16.902628 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.949258 kubelet[2929]: E0117 12:34:16.949084 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.949258 kubelet[2929]: W0117 12:34:16.949118 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.949258 kubelet[2929]: E0117 12:34:16.949147 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.950016 kubelet[2929]: E0117 12:34:16.949830 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.950016 kubelet[2929]: W0117 12:34:16.949849 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.950016 kubelet[2929]: E0117 12:34:16.949868 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:16.950341 kubelet[2929]: E0117 12:34:16.950321 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.950599 kubelet[2929]: W0117 12:34:16.950413 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.950599 kubelet[2929]: E0117 12:34:16.950440 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.950795 kubelet[2929]: E0117 12:34:16.950776 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.950901 kubelet[2929]: W0117 12:34:16.950882 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.951010 kubelet[2929]: E0117 12:34:16.950993 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:16.951527 kubelet[2929]: E0117 12:34:16.951445 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:16.951527 kubelet[2929]: W0117 12:34:16.951464 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:16.951527 kubelet[2929]: E0117 12:34:16.951483 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.051339 kubelet[2929]: E0117 12:34:17.048533 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.051339 kubelet[2929]: W0117 12:34:17.048564 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.051339 kubelet[2929]: E0117 12:34:17.048592 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.052119 kubelet[2929]: E0117 12:34:17.051496 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.052119 kubelet[2929]: W0117 12:34:17.051511 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.052119 kubelet[2929]: E0117 12:34:17.051540 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:17.052974 kubelet[2929]: E0117 12:34:17.052864 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.052974 kubelet[2929]: W0117 12:34:17.052884 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.052974 kubelet[2929]: E0117 12:34:17.052903 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.056300 kubelet[2929]: E0117 12:34:17.054422 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.056300 kubelet[2929]: W0117 12:34:17.054450 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.056300 kubelet[2929]: E0117 12:34:17.054913 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.059309 kubelet[2929]: E0117 12:34:17.058365 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.059309 kubelet[2929]: W0117 12:34:17.058388 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.059309 kubelet[2929]: E0117 12:34:17.058656 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.063388 kubelet[2929]: E0117 12:34:17.063367 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.064273 kubelet[2929]: W0117 12:34:17.064236 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.064858 kubelet[2929]: E0117 12:34:17.064590 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.097812 kubelet[2929]: E0117 12:34:17.097629 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.097812 kubelet[2929]: W0117 12:34:17.097659 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.097812 kubelet[2929]: E0117 12:34:17.097684 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:17.105248 kubelet[2929]: E0117 12:34:17.105222 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:17.105248 kubelet[2929]: W0117 12:34:17.105247 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:17.105474 kubelet[2929]: E0117 12:34:17.105268 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:17.207241 containerd[1621]: time="2025-01-17T12:34:17.206998730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c94648d8-zzp4b,Uid:fde02600-c6e4-4397-916b-c99294bad395,Namespace:calico-system,Attempt:0,}" Jan 17 12:34:17.260589 containerd[1621]: time="2025-01-17T12:34:17.259469699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:17.262147 containerd[1621]: time="2025-01-17T12:34:17.261707195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:17.262147 containerd[1621]: time="2025-01-17T12:34:17.261761314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:17.262147 containerd[1621]: time="2025-01-17T12:34:17.261904208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:17.317515 containerd[1621]: time="2025-01-17T12:34:17.317446965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpfg4,Uid:82784ad9-6b01-4939-a1f9-19a61cc0aa71,Namespace:calico-system,Attempt:0,}" Jan 17 12:34:17.392811 containerd[1621]: time="2025-01-17T12:34:17.391774680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:17.392811 containerd[1621]: time="2025-01-17T12:34:17.391893482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:17.392811 containerd[1621]: time="2025-01-17T12:34:17.391919345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:17.393586 containerd[1621]: time="2025-01-17T12:34:17.392080543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:17.432754 containerd[1621]: time="2025-01-17T12:34:17.432585892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c94648d8-zzp4b,Uid:fde02600-c6e4-4397-916b-c99294bad395,Namespace:calico-system,Attempt:0,} returns sandbox id \"92d3bbdef470b2538187172e7f93ea653ff70f939a6b9a7a500f8e051b30ac04\"" Jan 17 12:34:17.438824 containerd[1621]: time="2025-01-17T12:34:17.437136006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:34:17.486431 containerd[1621]: time="2025-01-17T12:34:17.486351978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpfg4,Uid:82784ad9-6b01-4939-a1f9-19a61cc0aa71,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\"" Jan 17 12:34:17.682168 kubelet[2929]: E0117 12:34:17.681657 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:19.007425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3557460360.mount: Deactivated successfully. Jan 17 12:34:19.682301 kubelet[2929]: E0117 12:34:19.681616 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:20.660689 containerd[1621]: time="2025-01-17T12:34:20.660634516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:20.663376 containerd[1621]: time="2025-01-17T12:34:20.663305371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:34:20.664780 containerd[1621]: time="2025-01-17T12:34:20.664719866Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:20.674646 containerd[1621]: time="2025-01-17T12:34:20.674262052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:20.679130 containerd[1621]: time="2025-01-17T12:34:20.679068424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.241874808s" Jan 17 12:34:20.679576 containerd[1621]: time="2025-01-17T12:34:20.679341116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:34:20.682121 containerd[1621]: time="2025-01-17T12:34:20.680894417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 
17 12:34:20.706167 containerd[1621]: time="2025-01-17T12:34:20.705835402Z" level=info msg="CreateContainer within sandbox \"92d3bbdef470b2538187172e7f93ea653ff70f939a6b9a7a500f8e051b30ac04\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:34:20.725025 containerd[1621]: time="2025-01-17T12:34:20.724802217Z" level=info msg="CreateContainer within sandbox \"92d3bbdef470b2538187172e7f93ea653ff70f939a6b9a7a500f8e051b30ac04\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c236b8a48a7c0ba15fc515046c1b1267dd56097c0f63c25e2987a7467683d8ab\"" Jan 17 12:34:20.729945 containerd[1621]: time="2025-01-17T12:34:20.728715898Z" level=info msg="StartContainer for \"c236b8a48a7c0ba15fc515046c1b1267dd56097c0f63c25e2987a7467683d8ab\"" Jan 17 12:34:20.734152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574836188.mount: Deactivated successfully. Jan 17 12:34:20.870450 containerd[1621]: time="2025-01-17T12:34:20.870148572Z" level=info msg="StartContainer for \"c236b8a48a7c0ba15fc515046c1b1267dd56097c0f63c25e2987a7467683d8ab\" returns successfully" Jan 17 12:34:21.680939 kubelet[2929]: E0117 12:34:21.680713 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:21.897712 kubelet[2929]: I0117 12:34:21.894579 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8c94648d8-zzp4b" podStartSLOduration=3.648382689 podStartE2EDuration="6.892676379s" podCreationTimestamp="2025-01-17 12:34:15 +0000 UTC" firstStartedPulling="2025-01-17 12:34:17.435580403 +0000 UTC m=+24.013431058" lastFinishedPulling="2025-01-17 12:34:20.679874077 +0000 UTC m=+27.257724748" observedRunningTime="2025-01-17 12:34:21.887528158 +0000 UTC m=+28.465378836" watchObservedRunningTime="2025-01-17 12:34:21.892676379 +0000 UTC m=+28.470527044" Jan 17 12:34:21.965001 kubelet[2929]: E0117 12:34:21.964814 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.965001 kubelet[2929]: W0117 12:34:21.964870 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.965001 kubelet[2929]: E0117 12:34:21.964958 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.965488 kubelet[2929]: E0117 12:34:21.965351 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.965488 kubelet[2929]: W0117 12:34:21.965379 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.965488 kubelet[2929]: E0117 12:34:21.965398 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:21.966202 kubelet[2929]: E0117 12:34:21.965692 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.966202 kubelet[2929]: W0117 12:34:21.965706 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.966202 kubelet[2929]: E0117 12:34:21.965723 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.966202 kubelet[2929]: E0117 12:34:21.966067 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.966202 kubelet[2929]: W0117 12:34:21.966082 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.966202 kubelet[2929]: E0117 12:34:21.966099 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.967183 kubelet[2929]: E0117 12:34:21.966418 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.967183 kubelet[2929]: W0117 12:34:21.966444 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.967183 kubelet[2929]: E0117 12:34:21.966460 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.967183 kubelet[2929]: E0117 12:34:21.966811 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.967183 kubelet[2929]: W0117 12:34:21.966832 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.967183 kubelet[2929]: E0117 12:34:21.966849 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.967574 kubelet[2929]: E0117 12:34:21.967484 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.967574 kubelet[2929]: W0117 12:34:21.967519 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.967574 kubelet[2929]: E0117 12:34:21.967536 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:21.967833 kubelet[2929]: E0117 12:34:21.967801 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.967833 kubelet[2929]: W0117 12:34:21.967820 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.967996 kubelet[2929]: E0117 12:34:21.967838 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.968169 kubelet[2929]: E0117 12:34:21.968150 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.968169 kubelet[2929]: W0117 12:34:21.968168 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.968476 kubelet[2929]: E0117 12:34:21.968187 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.968476 kubelet[2929]: E0117 12:34:21.968453 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.968476 kubelet[2929]: W0117 12:34:21.968465 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.968665 kubelet[2929]: E0117 12:34:21.968480 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.968758 kubelet[2929]: E0117 12:34:21.968736 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.968758 kubelet[2929]: W0117 12:34:21.968757 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.968869 kubelet[2929]: E0117 12:34:21.968786 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.969149 kubelet[2929]: E0117 12:34:21.969036 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.969149 kubelet[2929]: W0117 12:34:21.969057 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.969149 kubelet[2929]: E0117 12:34:21.969074 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:21.970082 kubelet[2929]: E0117 12:34:21.969774 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.970082 kubelet[2929]: W0117 12:34:21.969795 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.970082 kubelet[2929]: E0117 12:34:21.969818 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.970082 kubelet[2929]: E0117 12:34:21.970077 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.970378 kubelet[2929]: W0117 12:34:21.970090 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.970378 kubelet[2929]: E0117 12:34:21.970107 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.970492 kubelet[2929]: E0117 12:34:21.970451 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.970492 kubelet[2929]: W0117 12:34:21.970465 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.970492 kubelet[2929]: E0117 12:34:21.970482 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.995959 kubelet[2929]: E0117 12:34:21.995809 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.995959 kubelet[2929]: W0117 12:34:21.995831 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.995959 kubelet[2929]: E0117 12:34:21.995851 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.996274 kubelet[2929]: E0117 12:34:21.996209 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.996274 kubelet[2929]: W0117 12:34:21.996223 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.996274 kubelet[2929]: E0117 12:34:21.996264 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:21.996890 kubelet[2929]: E0117 12:34:21.996779 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.996890 kubelet[2929]: W0117 12:34:21.996799 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.996890 kubelet[2929]: E0117 12:34:21.996832 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.997160 kubelet[2929]: E0117 12:34:21.997079 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.997160 kubelet[2929]: W0117 12:34:21.997093 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.997160 kubelet[2929]: E0117 12:34:21.997122 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.997469 kubelet[2929]: E0117 12:34:21.997449 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.997469 kubelet[2929]: W0117 12:34:21.997467 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.997689 kubelet[2929]: E0117 12:34:21.997492 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.997812 kubelet[2929]: E0117 12:34:21.997790 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.997812 kubelet[2929]: W0117 12:34:21.997808 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.998008 kubelet[2929]: E0117 12:34:21.997846 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:21.998576 kubelet[2929]: E0117 12:34:21.998395 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:21.998576 kubelet[2929]: W0117 12:34:21.998426 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:21.998576 kubelet[2929]: E0117 12:34:21.998526 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.998877 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.000558 kubelet[2929]: W0117 12:34:21.998905 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.999198 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.000558 kubelet[2929]: W0117 12:34:21.999212 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.999473 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.000558 kubelet[2929]: W0117 12:34:21.999485 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.999534 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.999785 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.000558 kubelet[2929]: W0117 12:34:21.999799 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:21.999816 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.000558 kubelet[2929]: E0117 12:34:22.000366 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.001123 kubelet[2929]: W0117 12:34:22.000392 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.001123 kubelet[2929]: E0117 12:34:22.000408 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:22.001123 kubelet[2929]: E0117 12:34:22.000969 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.001123 kubelet[2929]: W0117 12:34:22.000982 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.001123 kubelet[2929]: E0117 12:34:22.001000 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.001123 kubelet[2929]: E0117 12:34:22.001041 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.001835 kubelet[2929]: E0117 12:34:22.001326 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.001835 kubelet[2929]: W0117 12:34:22.001339 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.001835 kubelet[2929]: E0117 12:34:22.001363 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.001835 kubelet[2929]: E0117 12:34:22.001365 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.002790 kubelet[2929]: E0117 12:34:22.002164 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.002790 kubelet[2929]: W0117 12:34:22.002196 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.002790 kubelet[2929]: E0117 12:34:22.002222 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.003409 kubelet[2929]: E0117 12:34:22.003097 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.003409 kubelet[2929]: W0117 12:34:22.003142 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.003409 kubelet[2929]: E0117 12:34:22.003161 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:34:22.004305 kubelet[2929]: E0117 12:34:22.003549 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.004305 kubelet[2929]: W0117 12:34:22.003761 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.004305 kubelet[2929]: E0117 12:34:22.003783 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.005167 kubelet[2929]: E0117 12:34:22.005148 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:34:22.005416 kubelet[2929]: W0117 12:34:22.005216 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:34:22.005416 kubelet[2929]: E0117 12:34:22.005239 2929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:34:22.278523 containerd[1621]: time="2025-01-17T12:34:22.278402311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:34:22.282259 containerd[1621]: time="2025-01-17T12:34:22.282214361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.601268994s" Jan 17 12:34:22.282372 containerd[1621]: time="2025-01-17T12:34:22.282268407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:34:22.283451 containerd[1621]: time="2025-01-17T12:34:22.283397535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:22.284939 containerd[1621]: time="2025-01-17T12:34:22.284876495Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:22.286184 containerd[1621]: time="2025-01-17T12:34:22.286139961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:22.289079 containerd[1621]: time="2025-01-17T12:34:22.289020278Z" level=info msg="CreateContainer within sandbox \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:34:22.325832 containerd[1621]: time="2025-01-17T12:34:22.325651312Z" level=info msg="CreateContainer within sandbox 
\"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6\"" Jan 17 12:34:22.327009 containerd[1621]: time="2025-01-17T12:34:22.326970055Z" level=info msg="StartContainer for \"110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6\"" Jan 17 12:34:22.425972 containerd[1621]: time="2025-01-17T12:34:22.425757244Z" level=info msg="StartContainer for \"110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6\" returns successfully" Jan 17 12:34:22.487342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6-rootfs.mount: Deactivated successfully. Jan 17 12:34:22.633014 containerd[1621]: time="2025-01-17T12:34:22.623032026Z" level=info msg="shim disconnected" id=110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6 namespace=k8s.io Jan 17 12:34:22.633014 containerd[1621]: time="2025-01-17T12:34:22.632925406Z" level=warning msg="cleaning up after shim disconnected" id=110b3c6d70578a75e75bc16a1d3920ad672cef4b8697a3287d9498fd2bd465c6 namespace=k8s.io Jan 17 12:34:22.633014 containerd[1621]: time="2025-01-17T12:34:22.632954606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:34:22.878160 kubelet[2929]: I0117 12:34:22.878102 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:34:22.881313 containerd[1621]: time="2025-01-17T12:34:22.881255180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:34:23.681743 kubelet[2929]: E0117 12:34:23.681040 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:25.684598 kubelet[2929]: E0117 12:34:25.681696 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:25.846924 kubelet[2929]: I0117 12:34:25.845681 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:34:27.681625 kubelet[2929]: E0117 12:34:27.680660 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:29.484579 containerd[1621]: time="2025-01-17T12:34:29.484502157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:29.486733 containerd[1621]: time="2025-01-17T12:34:29.486657832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:34:29.487357 containerd[1621]: time="2025-01-17T12:34:29.487028603Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:29.490344 containerd[1621]: time="2025-01-17T12:34:29.490256061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:29.492218 containerd[1621]: time="2025-01-17T12:34:29.491914498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.610582377s" Jan 17 12:34:29.492218 containerd[1621]: time="2025-01-17T12:34:29.491969057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:34:29.496240 containerd[1621]: time="2025-01-17T12:34:29.496203176Z" level=info msg="CreateContainer within sandbox \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:34:29.516703 containerd[1621]: time="2025-01-17T12:34:29.516621416Z" level=info msg="CreateContainer within sandbox \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817\"" Jan 17 12:34:29.518872 containerd[1621]: time="2025-01-17T12:34:29.518805067Z" level=info msg="StartContainer for \"deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817\"" Jan 17 12:34:29.639094 containerd[1621]: time="2025-01-17T12:34:29.638933387Z" level=info msg="StartContainer for \"deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817\" returns successfully" Jan 17 12:34:29.681887 kubelet[2929]: E0117 12:34:29.681364 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:30.958034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817-rootfs.mount: Deactivated successfully. 
Jan 17 12:34:30.962328 containerd[1621]: time="2025-01-17T12:34:30.961885673Z" level=info msg="shim disconnected" id=deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817 namespace=k8s.io Jan 17 12:34:30.962328 containerd[1621]: time="2025-01-17T12:34:30.962048560Z" level=warning msg="cleaning up after shim disconnected" id=deccb42001ed7f70e1d25c48bfc771c3c6cb460611fa5c01df87bff99866b817 namespace=k8s.io Jan 17 12:34:30.962328 containerd[1621]: time="2025-01-17T12:34:30.962073673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:34:30.978295 kubelet[2929]: I0117 12:34:30.978011 2929 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:34:31.029350 kubelet[2929]: I0117 12:34:31.028858 2929 topology_manager.go:215] "Topology Admit Handler" podUID="9c1443c2-e0fb-459e-a578-bd981d23c7b4" podNamespace="kube-system" podName="coredns-76f75df574-7qcnn" Jan 17 12:34:31.047568 kubelet[2929]: I0117 12:34:31.046681 2929 topology_manager.go:215] "Topology Admit Handler" podUID="31f2088e-c721-48df-8cef-797f0799b017" podNamespace="kube-system" podName="coredns-76f75df574-kfzfn" Jan 17 12:34:31.047568 kubelet[2929]: I0117 12:34:31.047110 2929 topology_manager.go:215] "Topology Admit Handler" podUID="a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7" podNamespace="calico-apiserver" podName="calico-apiserver-7554c79d77-tz9dc" Jan 17 12:34:31.052099 kubelet[2929]: I0117 12:34:31.052072 2929 topology_manager.go:215] "Topology Admit Handler" podUID="53904d6f-b51c-4d8b-962c-f67a32e4ae5e" podNamespace="calico-apiserver" podName="calico-apiserver-7554c79d77-w2z5v" Jan 17 12:34:31.052329 kubelet[2929]: I0117 12:34:31.052302 2929 topology_manager.go:215] "Topology Admit Handler" podUID="4980fedd-f173-462d-ae26-6b5775cf7947" podNamespace="calico-system" podName="calico-kube-controllers-86cbc7c54b-hswjc" Jan 17 12:34:31.068021 kubelet[2929]: I0117 12:34:31.067978 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4k4\" (UniqueName: \"kubernetes.io/projected/9c1443c2-e0fb-459e-a578-bd981d23c7b4-kube-api-access-jx4k4\") pod \"coredns-76f75df574-7qcnn\" (UID: \"9c1443c2-e0fb-459e-a578-bd981d23c7b4\") " pod="kube-system/coredns-76f75df574-7qcnn" Jan 17 12:34:31.068181 kubelet[2929]: I0117 12:34:31.068052 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1443c2-e0fb-459e-a578-bd981d23c7b4-config-volume\") pod \"coredns-76f75df574-7qcnn\" (UID: \"9c1443c2-e0fb-459e-a578-bd981d23c7b4\") " pod="kube-system/coredns-76f75df574-7qcnn" Jan 17 12:34:31.168992 kubelet[2929]: I0117 12:34:31.168825 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7-calico-apiserver-certs\") pod \"calico-apiserver-7554c79d77-tz9dc\" (UID: \"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7\") " pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" Jan 17 12:34:31.170539 kubelet[2929]: I0117 12:34:31.169584 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31f2088e-c721-48df-8cef-797f0799b017-config-volume\") pod \"coredns-76f75df574-kfzfn\" (UID: \"31f2088e-c721-48df-8cef-797f0799b017\") " pod="kube-system/coredns-76f75df574-kfzfn" Jan 17 12:34:31.170539 kubelet[2929]: I0117 
12:34:31.169671 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53904d6f-b51c-4d8b-962c-f67a32e4ae5e-calico-apiserver-certs\") pod \"calico-apiserver-7554c79d77-w2z5v\" (UID: \"53904d6f-b51c-4d8b-962c-f67a32e4ae5e\") " pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" Jan 17 12:34:31.170539 kubelet[2929]: I0117 12:34:31.169731 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7p6b\" (UniqueName: \"kubernetes.io/projected/53904d6f-b51c-4d8b-962c-f67a32e4ae5e-kube-api-access-l7p6b\") pod \"calico-apiserver-7554c79d77-w2z5v\" (UID: \"53904d6f-b51c-4d8b-962c-f67a32e4ae5e\") " pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" Jan 17 12:34:31.170539 kubelet[2929]: I0117 12:34:31.169803 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pw6l\" (UniqueName: \"kubernetes.io/projected/4980fedd-f173-462d-ae26-6b5775cf7947-kube-api-access-2pw6l\") pod \"calico-kube-controllers-86cbc7c54b-hswjc\" (UID: \"4980fedd-f173-462d-ae26-6b5775cf7947\") " pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" Jan 17 12:34:31.170539 kubelet[2929]: I0117 12:34:31.169841 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwt2\" (UniqueName: \"kubernetes.io/projected/31f2088e-c721-48df-8cef-797f0799b017-kube-api-access-nnwt2\") pod \"coredns-76f75df574-kfzfn\" (UID: \"31f2088e-c721-48df-8cef-797f0799b017\") " pod="kube-system/coredns-76f75df574-kfzfn" Jan 17 12:34:31.170856 kubelet[2929]: I0117 12:34:31.169879 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4980fedd-f173-462d-ae26-6b5775cf7947-tigera-ca-bundle\") pod \"calico-kube-controllers-86cbc7c54b-hswjc\" (UID: \"4980fedd-f173-462d-ae26-6b5775cf7947\") " pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" Jan 17 12:34:31.170856 kubelet[2929]: I0117 12:34:31.169913 2929 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdxn\" (UniqueName: \"kubernetes.io/projected/a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7-kube-api-access-rbdxn\") pod \"calico-apiserver-7554c79d77-tz9dc\" (UID: \"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7\") " pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" Jan 17 12:34:31.370991 containerd[1621]: time="2025-01-17T12:34:31.370616165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kfzfn,Uid:31f2088e-c721-48df-8cef-797f0799b017,Namespace:kube-system,Attempt:0,}" Jan 17 12:34:31.371350 containerd[1621]: time="2025-01-17T12:34:31.371055048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-tz9dc,Uid:a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:34:31.375683 containerd[1621]: time="2025-01-17T12:34:31.375649713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-w2z5v,Uid:53904d6f-b51c-4d8b-962c-f67a32e4ae5e,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:34:31.376110 containerd[1621]: time="2025-01-17T12:34:31.376081283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7qcnn,Uid:9c1443c2-e0fb-459e-a578-bd981d23c7b4,Namespace:kube-system,Attempt:0,}" Jan 17 
12:34:31.382768 containerd[1621]: time="2025-01-17T12:34:31.382663095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cbc7c54b-hswjc,Uid:4980fedd-f173-462d-ae26-6b5775cf7947,Namespace:calico-system,Attempt:0,}" Jan 17 12:34:31.691386 containerd[1621]: time="2025-01-17T12:34:31.690124312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk8nd,Uid:d743c195-9b4b-4bf7-a2e2-343eb8b2964b,Namespace:calico-system,Attempt:0,}" Jan 17 12:34:31.777756 containerd[1621]: time="2025-01-17T12:34:31.777682809Z" level=error msg="Failed to destroy network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.778796 containerd[1621]: time="2025-01-17T12:34:31.778757066Z" level=error msg="Failed to destroy network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.783839 containerd[1621]: time="2025-01-17T12:34:31.783578639Z" level=error msg="encountered an error cleaning up failed sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.785695 containerd[1621]: time="2025-01-17T12:34:31.784561750Z" level=error msg="encountered an error cleaning up failed sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.798958 containerd[1621]: time="2025-01-17T12:34:31.798904943Z" level=error msg="Failed to destroy network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.799987 containerd[1621]: time="2025-01-17T12:34:31.778154847Z" level=error msg="Failed to destroy network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.800316 containerd[1621]: time="2025-01-17T12:34:31.799990313Z" level=error msg="encountered an error cleaning up failed sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.800463 containerd[1621]: time="2025-01-17T12:34:31.800425671Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7554c79d77-tz9dc,Uid:a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.803889 containerd[1621]: time="2025-01-17T12:34:31.803057812Z" level=error msg="encountered an error cleaning up failed sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.803889 containerd[1621]: time="2025-01-17T12:34:31.803132255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cbc7c54b-hswjc,Uid:4980fedd-f173-462d-ae26-6b5775cf7947,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.807427 containerd[1621]: time="2025-01-17T12:34:31.806610598Z" level=error msg="Failed to destroy network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.807427 containerd[1621]: time="2025-01-17T12:34:31.807011849Z" level=error msg="encountered an error cleaning up failed sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.807427 containerd[1621]: time="2025-01-17T12:34:31.807056363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7qcnn,Uid:9c1443c2-e0fb-459e-a578-bd981d23c7b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.807427 containerd[1621]: time="2025-01-17T12:34:31.800023842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kfzfn,Uid:31f2088e-c721-48df-8cef-797f0799b017,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.807427 containerd[1621]: time="2025-01-17T12:34:31.800033352Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7554c79d77-w2z5v,Uid:53904d6f-b51c-4d8b-962c-f67a32e4ae5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.808286 kubelet[2929]: E0117 12:34:31.808230 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.808458 kubelet[2929]: E0117 12:34:31.808351 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" Jan 17 12:34:31.808458 kubelet[2929]: E0117 12:34:31.808405 2929 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" Jan 17 12:34:31.809064 kubelet[2929]: E0117 12:34:31.808504 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7554c79d77-w2z5v_calico-apiserver(53904d6f-b51c-4d8b-962c-f67a32e4ae5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7554c79d77-w2z5v_calico-apiserver(53904d6f-b51c-4d8b-962c-f67a32e4ae5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" podUID="53904d6f-b51c-4d8b-962c-f67a32e4ae5e" Jan 17 12:34:31.809064 kubelet[2929]: E0117 12:34:31.808694 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.809064 kubelet[2929]: E0117 12:34:31.808759 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" Jan 17 12:34:31.809380 kubelet[2929]: E0117 12:34:31.808829 2929 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" Jan 17 12:34:31.809380 kubelet[2929]: E0117 12:34:31.808920 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86cbc7c54b-hswjc_calico-system(4980fedd-f173-462d-ae26-6b5775cf7947)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86cbc7c54b-hswjc_calico-system(4980fedd-f173-462d-ae26-6b5775cf7947)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" podUID="4980fedd-f173-462d-ae26-6b5775cf7947" Jan 17 12:34:31.809380 kubelet[2929]: E0117 12:34:31.808983 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.809661 kubelet[2929]: E0117 12:34:31.809026 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" Jan 17 12:34:31.809661 kubelet[2929]: E0117 12:34:31.809056 2929 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" Jan 17 12:34:31.809661 kubelet[2929]: E0117 12:34:31.809108 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7554c79d77-tz9dc_calico-apiserver(a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7554c79d77-tz9dc_calico-apiserver(a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" podUID="a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7" Jan 17 12:34:31.809930 kubelet[2929]: E0117 12:34:31.809169 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.809930 kubelet[2929]: E0117 12:34:31.809205 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kfzfn" Jan 17 12:34:31.809930 kubelet[2929]: E0117 12:34:31.809252 2929 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kfzfn" Jan 17 12:34:31.810121 kubelet[2929]: E0117 12:34:31.809356 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kfzfn_kube-system(31f2088e-c721-48df-8cef-797f0799b017)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kfzfn_kube-system(31f2088e-c721-48df-8cef-797f0799b017)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kfzfn" podUID="31f2088e-c721-48df-8cef-797f0799b017" Jan 17 12:34:31.810121 kubelet[2929]: E0117 12:34:31.809404 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.810121 kubelet[2929]: E0117 12:34:31.809447 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7qcnn" Jan 17 12:34:31.810744 kubelet[2929]: E0117 12:34:31.809506 2929 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7qcnn" Jan 17 12:34:31.810744 kubelet[2929]: E0117 12:34:31.809554 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7qcnn_kube-system(9c1443c2-e0fb-459e-a578-bd981d23c7b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7qcnn_kube-system(9c1443c2-e0fb-459e-a578-bd981d23c7b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7qcnn" podUID="9c1443c2-e0fb-459e-a578-bd981d23c7b4" Jan 17 12:34:31.865436 containerd[1621]: time="2025-01-17T12:34:31.865231602Z" level=error msg="Failed to destroy network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.866313 containerd[1621]: time="2025-01-17T12:34:31.865916610Z" level=error msg="encountered an error cleaning up failed sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.866313 containerd[1621]: time="2025-01-17T12:34:31.866001883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk8nd,Uid:d743c195-9b4b-4bf7-a2e2-343eb8b2964b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.866645 kubelet[2929]: E0117 12:34:31.866593 2929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:31.866757 kubelet[2929]: E0117 12:34:31.866741 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:31.866846 kubelet[2929]: 
E0117 12:34:31.866819 2929 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xk8nd" Jan 17 12:34:31.868298 kubelet[2929]: E0117 12:34:31.867001 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xk8nd_calico-system(d743c195-9b4b-4bf7-a2e2-343eb8b2964b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xk8nd_calico-system(d743c195-9b4b-4bf7-a2e2-343eb8b2964b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:31.933901 kubelet[2929]: I0117 12:34:31.933191 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:31.949023 containerd[1621]: time="2025-01-17T12:34:31.947627831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:34:31.955305 kubelet[2929]: I0117 12:34:31.955253 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:31.996769 kubelet[2929]: I0117 12:34:31.996729 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:32.003173 kubelet[2929]: I0117 12:34:32.001124 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:32.006738 kubelet[2929]: I0117 12:34:32.006700 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:32.012895 containerd[1621]: time="2025-01-17T12:34:32.012165353Z" level=info msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" Jan 17 12:34:32.014962 containerd[1621]: time="2025-01-17T12:34:32.013190541Z" level=info msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" Jan 17 12:34:32.014962 containerd[1621]: time="2025-01-17T12:34:32.014346690Z" level=info msg="Ensure that sandbox 0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50 in task-service has been cleanup successfully" Jan 17 12:34:32.015189 containerd[1621]: time="2025-01-17T12:34:32.015161333Z" level=info msg="Ensure that sandbox 24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e in task-service has been cleanup successfully" Jan 17 12:34:32.016205 containerd[1621]: time="2025-01-17T12:34:32.016172719Z" level=info msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" Jan 17 12:34:32.017312 containerd[1621]: time="2025-01-17T12:34:32.017222357Z" 
level=info msg="Ensure that sandbox a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023 in task-service has been cleanup successfully" Jan 17 12:34:32.017779 containerd[1621]: time="2025-01-17T12:34:32.017712578Z" level=info msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" Jan 17 12:34:32.018550 containerd[1621]: time="2025-01-17T12:34:32.018476553Z" level=info msg="Ensure that sandbox b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b in task-service has been cleanup successfully" Jan 17 12:34:32.019250 containerd[1621]: time="2025-01-17T12:34:32.019219753Z" level=info msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" Jan 17 12:34:32.020276 containerd[1621]: time="2025-01-17T12:34:32.020211651Z" level=info msg="Ensure that sandbox 003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d in task-service has been cleanup successfully" Jan 17 12:34:32.022947 kubelet[2929]: I0117 12:34:32.022917 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:32.027002 containerd[1621]: time="2025-01-17T12:34:32.026623384Z" level=info msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" Jan 17 12:34:32.033497 containerd[1621]: time="2025-01-17T12:34:32.033434407Z" level=info msg="Ensure that sandbox 511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa in task-service has been cleanup successfully" Jan 17 12:34:32.118649 containerd[1621]: time="2025-01-17T12:34:32.118412332Z" level=error msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" failed" error="failed to destroy network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.119321 kubelet[2929]: E0117 12:34:32.119195 2929 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:32.133783 kubelet[2929]: E0117 12:34:32.133393 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b"} Jan 17 12:34:32.133783 kubelet[2929]: E0117 12:34:32.133491 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53904d6f-b51c-4d8b-962c-f67a32e4ae5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.133783 kubelet[2929]: E0117 12:34:32.133552 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"53904d6f-b51c-4d8b-962c-f67a32e4ae5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" podUID="53904d6f-b51c-4d8b-962c-f67a32e4ae5e" Jan 17 12:34:32.136241 containerd[1621]: time="2025-01-17T12:34:32.136188151Z" level=error msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" failed" error="failed to destroy network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.136494 kubelet[2929]: E0117 12:34:32.136470 2929 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:32.136494 kubelet[2929]: E0117 12:34:32.136513 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa"} Jan 17 12:34:32.136657 kubelet[2929]: E0117 12:34:32.136558 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.136657 kubelet[2929]: E0117 12:34:32.136607 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" podUID="a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7" Jan 17 12:34:32.159633 containerd[1621]: time="2025-01-17T12:34:32.159465743Z" level=error msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" failed" error="failed to destroy network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.161783 kubelet[2929]: E0117 12:34:32.161639 2929 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:32.161783 kubelet[2929]: E0117 12:34:32.161702 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023"} Jan 17 12:34:32.161974 kubelet[2929]: E0117 12:34:32.161788 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c1443c2-e0fb-459e-a578-bd981d23c7b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.161974 kubelet[2929]: E0117 12:34:32.161849 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c1443c2-e0fb-459e-a578-bd981d23c7b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7qcnn" podUID="9c1443c2-e0fb-459e-a578-bd981d23c7b4" Jan 17 12:34:32.171574 containerd[1621]: time="2025-01-17T12:34:32.171490705Z" level=error msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" failed" error="failed to destroy network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.171780 kubelet[2929]: E0117 12:34:32.171754 2929 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:32.171891 kubelet[2929]: E0117 12:34:32.171802 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50"} Jan 17 12:34:32.171891 kubelet[2929]: E0117 12:34:32.171862 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31f2088e-c721-48df-8cef-797f0799b017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.172039 kubelet[2929]: E0117 12:34:32.171902 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31f2088e-c721-48df-8cef-797f0799b017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kfzfn" podUID="31f2088e-c721-48df-8cef-797f0799b017" Jan 17 12:34:32.172764 kubelet[2929]: E0117 12:34:32.172439 2929 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:32.172764 kubelet[2929]: E0117 12:34:32.172471 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d"} Jan 17 12:34:32.172764 kubelet[2929]: E0117 12:34:32.172516 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.173087 containerd[1621]: time="2025-01-17T12:34:32.172161326Z" level=error msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" failed" error="failed to destroy network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.173222 kubelet[2929]: E0117 12:34:32.172556 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d743c195-9b4b-4bf7-a2e2-343eb8b2964b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xk8nd" podUID="d743c195-9b4b-4bf7-a2e2-343eb8b2964b" Jan 17 12:34:32.173536 containerd[1621]: time="2025-01-17T12:34:32.173490363Z" level=error msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" failed" error="failed to destroy network for sandbox 
\"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:34:32.173939 kubelet[2929]: E0117 12:34:32.173702 2929 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:32.173939 kubelet[2929]: E0117 12:34:32.173743 2929 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e"} Jan 17 12:34:32.173939 kubelet[2929]: E0117 12:34:32.173794 2929 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4980fedd-f173-462d-ae26-6b5775cf7947\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:34:32.173939 kubelet[2929]: E0117 12:34:32.173845 2929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4980fedd-f173-462d-ae26-6b5775cf7947\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" podUID="4980fedd-f173-462d-ae26-6b5775cf7947" Jan 17 12:34:41.876878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697220250.mount: Deactivated successfully. 
Jan 17 12:34:41.988139 containerd[1621]: time="2025-01-17T12:34:41.980812450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:41.989394 containerd[1621]: time="2025-01-17T12:34:41.989139023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:34:41.998532 containerd[1621]: time="2025-01-17T12:34:41.998448119Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:42.000014 containerd[1621]: time="2025-01-17T12:34:41.999727260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.052018871s" Jan 17 12:34:42.000014 containerd[1621]: time="2025-01-17T12:34:41.999789814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:34:42.000556 containerd[1621]: time="2025-01-17T12:34:42.000521906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:42.131770 containerd[1621]: time="2025-01-17T12:34:42.131252676Z" level=info msg="CreateContainer within sandbox \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:34:42.227633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880294995.mount: Deactivated successfully. Jan 17 12:34:42.257529 containerd[1621]: time="2025-01-17T12:34:42.257395984Z" level=info msg="CreateContainer within sandbox \"3f17fc2939011577a91b04fcffebdf626cf5d9f274f7124696d996d4239e05e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"33ad3950751d79c174150f50367b692701968be0b5ddd0e672652c8e6645d84b\"" Jan 17 12:34:42.262767 containerd[1621]: time="2025-01-17T12:34:42.262724420Z" level=info msg="StartContainer for \"33ad3950751d79c174150f50367b692701968be0b5ddd0e672652c8e6645d84b\"" Jan 17 12:34:42.541714 containerd[1621]: time="2025-01-17T12:34:42.540986344Z" level=info msg="StartContainer for \"33ad3950751d79c174150f50367b692701968be0b5ddd0e672652c8e6645d84b\" returns successfully" Jan 17 12:34:42.679383 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:34:42.680314 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
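With the calico/node image pulled (just over 10 s for roughly 142 MB) and the calico-node container started, the kernel immediately reports the WireGuard module loading. This is most likely Felix probing for WireGuard support, which Calico can use for optional node-to-node encryption, so the module appearing right after calico-node starts is expected rather than a problem. A quick way to confirm the module from the host, a sketch that only inspects the standard /sys/module layout:

    // wireguard_probe.go: check whether the wireguard kernel module is loaded,
    // mirroring what the "wireguard: WireGuard 1.0.0 loaded" kernel line implies.
    // Sketch only; it inspects /sys/module and configures nothing.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/module/wireguard"); err != nil {
            if os.IsNotExist(err) {
                fmt.Println("wireguard module not loaded")
                os.Exit(1)
            }
            fmt.Println("unexpected error:", err)
            os.Exit(1)
        }
        if data, err := os.ReadFile("/sys/module/wireguard/version"); err == nil {
            fmt.Printf("wireguard module loaded, version %s", data)
        } else {
            fmt.Println("wireguard module loaded")
        }
    }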
Jan 17 12:34:43.166541 kubelet[2929]: I0117 12:34:43.166483 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qpfg4" podStartSLOduration=2.613909402 podStartE2EDuration="27.126414216s" podCreationTimestamp="2025-01-17 12:34:16 +0000 UTC" firstStartedPulling="2025-01-17 12:34:17.488464899 +0000 UTC m=+24.066315554" lastFinishedPulling="2025-01-17 12:34:42.000969686 +0000 UTC m=+48.578820368" observedRunningTime="2025-01-17 12:34:43.116752109 +0000 UTC m=+49.694602780" watchObservedRunningTime="2025-01-17 12:34:43.126414216 +0000 UTC m=+49.704264889" Jan 17 12:34:43.701447 containerd[1621]: time="2025-01-17T12:34:43.700331252Z" level=info msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" Jan 17 12:34:43.702149 containerd[1621]: time="2025-01-17T12:34:43.701816883Z" level=info msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" Jan 17 12:34:43.766542 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 17 12:34:43.771748 systemd-journald[1185]: Under memory pressure, flushing caches. Jan 17 12:34:43.766779 systemd-resolved[1514]: Flushed all caches. Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.816 [INFO][4087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.820 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" iface="eth0" netns="/var/run/netns/cni-775b3882-14e7-313f-1389-03f6331bee7f" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.821 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" iface="eth0" netns="/var/run/netns/cni-775b3882-14e7-313f-1389-03f6331bee7f" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.823 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" iface="eth0" netns="/var/run/netns/cni-775b3882-14e7-313f-1389-03f6331bee7f" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.823 [INFO][4087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.823 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.993 [INFO][4101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.996 [INFO][4101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:43.997 [INFO][4101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:44.013 [WARNING][4101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:44.014 [INFO][4101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:44.016 [INFO][4101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:44.023760 containerd[1621]: 2025-01-17 12:34:44.018 [INFO][4087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:44.025758 containerd[1621]: time="2025-01-17T12:34:44.024896920Z" level=info msg="TearDown network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" successfully" Jan 17 12:34:44.025758 containerd[1621]: time="2025-01-17T12:34:44.024940356Z" level=info msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" returns successfully" Jan 17 12:34:44.030545 containerd[1621]: time="2025-01-17T12:34:44.028658218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-tz9dc,Uid:a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:34:44.029878 systemd[1]: run-netns-cni\x2d775b3882\x2d14e7\x2d313f\x2d1389\x2d03f6331bee7f.mount: Deactivated successfully. Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.817 [INFO][4088] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.821 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" iface="eth0" netns="/var/run/netns/cni-99c54a57-eb28-a109-adb0-3b8948616ef6" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.821 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" iface="eth0" netns="/var/run/netns/cni-99c54a57-eb28-a109-adb0-3b8948616ef6" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.822 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" iface="eth0" netns="/var/run/netns/cni-99c54a57-eb28-a109-adb0-3b8948616ef6" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.823 [INFO][4088] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.823 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.994 [INFO][4100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:43.996 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:44.016 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:44.035 [WARNING][4100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:44.035 [INFO][4100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:44.038 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:44.046791 containerd[1621]: 2025-01-17 12:34:44.041 [INFO][4088] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:44.050712 containerd[1621]: time="2025-01-17T12:34:44.050427689Z" level=info msg="TearDown network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" successfully" Jan 17 12:34:44.050712 containerd[1621]: time="2025-01-17T12:34:44.050499195Z" level=info msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" returns successfully" Jan 17 12:34:44.055815 systemd[1]: run-netns-cni\x2d99c54a57\x2deb28\x2da109\x2dadb0\x2d3b8948616ef6.mount: Deactivated successfully. 
Jan 17 12:34:44.069770 containerd[1621]: time="2025-01-17T12:34:44.069645958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-w2z5v,Uid:53904d6f-b51c-4d8b-962c-f67a32e4ae5e,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:34:44.317246 systemd-networkd[1263]: calid70ec7e51f2: Link UP Jan 17 12:34:44.321851 systemd-networkd[1263]: calid70ec7e51f2: Gained carrier Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.149 [INFO][4113] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.163 [INFO][4113] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0 calico-apiserver-7554c79d77- calico-apiserver a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7 786 0 2025-01-17 12:34:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7554c79d77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com calico-apiserver-7554c79d77-tz9dc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid70ec7e51f2 [] []}} ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.164 [INFO][4113] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.215 [INFO][4136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" HandleID="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.229 [INFO][4136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" HandleID="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hkhka.gb1.brightbox.com", "pod":"calico-apiserver-7554c79d77-tz9dc", "timestamp":"2025-01-17 12:34:44.215024957 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.229 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.229 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.230 [INFO][4136] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.232 [INFO][4136] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.241 [INFO][4136] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.248 [INFO][4136] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.252 [INFO][4136] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.256 [INFO][4136] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.256 [INFO][4136] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.258 [INFO][4136] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9 Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.264 [INFO][4136] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.271 [INFO][4136] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.193/26] block=192.168.13.192/26 handle="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.271 [INFO][4136] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.193/26] handle="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.271 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
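The ipam.go lines above walk a fixed sequence: take the host-wide IPAM lock, confirm this host's affinity for block 192.168.13.192/26, load the block, assign one free address (192.168.13.193), write the block back, then release the lock. A compact Go sketch of the claim-next-free-address step, assuming a simple in-memory allocation set (not Calico's datastore-backed implementation):

```go
package main

import (
	"fmt"
	"net"
)

// nextFree returns the first address inside the block that is not yet allocated.
// Pattern sketch only (load block, pick a free slot, write it back); the real
// ipam.go works against a shared datastore under the host-wide lock seen above.
func nextFree(block *net.IPNet, allocated map[string]bool) (net.IP, bool) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = next(ip) {
		if !allocated[ip.String()] {
			return ip, true
		}
	}
	return nil, false
}

// next returns ip+1 without modifying its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.13.192/26")
	allocated := map[string]bool{"192.168.13.192": true} // assume the network address is held back
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println(ip) // 192.168.13.193, the address claimed above for calico-apiserver-7554c79d77-tz9dc
	}
}
```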
Jan 17 12:34:44.401334 containerd[1621]: 2025-01-17 12:34:44.271 [INFO][4136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.193/26] IPv6=[] ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" HandleID="k8s-pod-network.b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.280 [INFO][4113] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7554c79d77-tz9dc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid70ec7e51f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.280 [INFO][4113] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.193/32] ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.281 [INFO][4113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid70ec7e51f2 ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.331 [INFO][4113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.332 [INFO][4113] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9", Pod:"calico-apiserver-7554c79d77-tz9dc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid70ec7e51f2", MAC:"ca:74:af:ac:62:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:44.407426 containerd[1621]: 2025-01-17 12:34:44.391 [INFO][4113] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-tz9dc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:44.401499 systemd-networkd[1263]: calif875724c2f1: Link UP Jan 17 12:34:44.402589 systemd-networkd[1263]: calif875724c2f1: Gained carrier Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.148 [INFO][4122] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.164 [INFO][4122] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0 calico-apiserver-7554c79d77- calico-apiserver 53904d6f-b51c-4d8b-962c-f67a32e4ae5e 787 0 2025-01-17 12:34:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7554c79d77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com calico-apiserver-7554c79d77-w2z5v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif875724c2f1 [] []}} ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" 
WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.164 [INFO][4122] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.218 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" HandleID="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.231 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" HandleID="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ac6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hkhka.gb1.brightbox.com", "pod":"calico-apiserver-7554c79d77-w2z5v", "timestamp":"2025-01-17 12:34:44.21873207 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.231 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.272 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.272 [INFO][4135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.278 [INFO][4135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.289 [INFO][4135] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.310 [INFO][4135] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.316 [INFO][4135] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.320 [INFO][4135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.320 [INFO][4135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.327 [INFO][4135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.348 [INFO][4135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.381 [INFO][4135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.194/26] block=192.168.13.192/26 handle="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.381 [INFO][4135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.194/26] handle="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.381 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:34:44.429515 containerd[1621]: 2025-01-17 12:34:44.381 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.194/26] IPv6=[] ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" HandleID="k8s-pod-network.78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.392 [INFO][4122] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"53904d6f-b51c-4d8b-962c-f67a32e4ae5e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7554c79d77-w2z5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif875724c2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.393 [INFO][4122] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.194/32] ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.394 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif875724c2f1 ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.405 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.407 [INFO][4122] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"53904d6f-b51c-4d8b-962c-f67a32e4ae5e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda", Pod:"calico-apiserver-7554c79d77-w2z5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif875724c2f1", MAC:"f6:2c:96:f3:40:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:44.431969 containerd[1621]: 2025-01-17 12:34:44.422 [INFO][4122] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda" Namespace="calico-apiserver" Pod="calico-apiserver-7554c79d77-w2z5v" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:44.485483 containerd[1621]: time="2025-01-17T12:34:44.485334423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:44.485747 containerd[1621]: time="2025-01-17T12:34:44.485433331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:44.485747 containerd[1621]: time="2025-01-17T12:34:44.485466915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:44.485747 containerd[1621]: time="2025-01-17T12:34:44.485633716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:44.559841 containerd[1621]: time="2025-01-17T12:34:44.548889940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:44.559841 containerd[1621]: time="2025-01-17T12:34:44.549039495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:44.559841 containerd[1621]: time="2025-01-17T12:34:44.549074033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:44.559841 containerd[1621]: time="2025-01-17T12:34:44.549271385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:44.806982 containerd[1621]: time="2025-01-17T12:34:44.805735804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-w2z5v,Uid:53904d6f-b51c-4d8b-962c-f67a32e4ae5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda\"" Jan 17 12:34:44.838994 containerd[1621]: time="2025-01-17T12:34:44.838515905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:34:44.926717 containerd[1621]: time="2025-01-17T12:34:44.926640531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7554c79d77-tz9dc,Uid:a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9\"" Jan 17 12:34:45.163316 kernel: bpftool[4376]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:34:45.504793 systemd-networkd[1263]: vxlan.calico: Link UP Jan 17 12:34:45.504808 systemd-networkd[1263]: vxlan.calico: Gained carrier Jan 17 12:34:45.682866 containerd[1621]: time="2025-01-17T12:34:45.682804071Z" level=info msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" Jan 17 12:34:45.685021 containerd[1621]: time="2025-01-17T12:34:45.684979751Z" level=info msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" Jan 17 12:34:45.687511 systemd-networkd[1263]: calid70ec7e51f2: Gained IPv6LL Jan 17 12:34:45.694350 containerd[1621]: time="2025-01-17T12:34:45.694310753Z" level=info msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" Jan 17 12:34:46.114316 kubelet[2929]: I0117 12:34:46.110694 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.931 [INFO][4470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.932 [INFO][4470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" iface="eth0" netns="/var/run/netns/cni-98c98995-0da4-fcd6-858b-be5e4eefdbff" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.932 [INFO][4470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" iface="eth0" netns="/var/run/netns/cni-98c98995-0da4-fcd6-858b-be5e4eefdbff" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.932 [INFO][4470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" iface="eth0" netns="/var/run/netns/cni-98c98995-0da4-fcd6-858b-be5e4eefdbff" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.932 [INFO][4470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:45.932 [INFO][4470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.101 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.101 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.101 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.118 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.118 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.123 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:46.149368 containerd[1621]: 2025-01-17 12:34:46.128 [INFO][4470] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:46.152017 systemd[1]: run-netns-cni\x2d98c98995\x2d0da4\x2dfcd6\x2d858b\x2dbe5e4eefdbff.mount: Deactivated successfully. 
Jan 17 12:34:46.157768 containerd[1621]: time="2025-01-17T12:34:46.156402591Z" level=info msg="TearDown network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" successfully" Jan 17 12:34:46.157768 containerd[1621]: time="2025-01-17T12:34:46.156449167Z" level=info msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" returns successfully" Jan 17 12:34:46.162780 containerd[1621]: time="2025-01-17T12:34:46.160164298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cbc7c54b-hswjc,Uid:4980fedd-f173-462d-ae26-6b5775cf7947,Namespace:calico-system,Attempt:1,}" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.889 [INFO][4445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.893 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" iface="eth0" netns="/var/run/netns/cni-0f11278f-4433-7ae1-28c9-31f71aff206c" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.894 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" iface="eth0" netns="/var/run/netns/cni-0f11278f-4433-7ae1-28c9-31f71aff206c" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.896 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" iface="eth0" netns="/var/run/netns/cni-0f11278f-4433-7ae1-28c9-31f71aff206c" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.896 [INFO][4445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:45.896 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.198 [INFO][4492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.199 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.199 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.220 [WARNING][4492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.220 [INFO][4492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.233 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:46.272839 containerd[1621]: 2025-01-17 12:34:46.259 [INFO][4445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:46.286320 containerd[1621]: time="2025-01-17T12:34:46.277028920Z" level=info msg="TearDown network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" successfully" Jan 17 12:34:46.286320 containerd[1621]: time="2025-01-17T12:34:46.281623018Z" level=info msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" returns successfully" Jan 17 12:34:46.286320 containerd[1621]: time="2025-01-17T12:34:46.285623531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7qcnn,Uid:9c1443c2-e0fb-459e-a578-bd981d23c7b4,Namespace:kube-system,Attempt:1,}" Jan 17 12:34:46.288905 systemd[1]: run-netns-cni\x2d0f11278f\x2d4433\x2d7ae1\x2d28c9\x2d31f71aff206c.mount: Deactivated successfully. Jan 17 12:34:46.325566 systemd-networkd[1263]: calif875724c2f1: Gained IPv6LL Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.953 [INFO][4466] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.954 [INFO][4466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" iface="eth0" netns="/var/run/netns/cni-5d35af09-cf7e-9e3b-9fb7-ca42b8476a4d" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.957 [INFO][4466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" iface="eth0" netns="/var/run/netns/cni-5d35af09-cf7e-9e3b-9fb7-ca42b8476a4d" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.969 [INFO][4466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" iface="eth0" netns="/var/run/netns/cni-5d35af09-cf7e-9e3b-9fb7-ca42b8476a4d" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.969 [INFO][4466] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:45.969 [INFO][4466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.246 [INFO][4509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.257 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.260 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.302 [WARNING][4509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.302 [INFO][4509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.308 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:46.327559 containerd[1621]: 2025-01-17 12:34:46.311 [INFO][4466] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:46.337348 containerd[1621]: time="2025-01-17T12:34:46.337293942Z" level=info msg="TearDown network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" successfully" Jan 17 12:34:46.337531 containerd[1621]: time="2025-01-17T12:34:46.337503907Z" level=info msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" returns successfully" Jan 17 12:34:46.338980 containerd[1621]: time="2025-01-17T12:34:46.338920405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kfzfn,Uid:31f2088e-c721-48df-8cef-797f0799b017,Namespace:kube-system,Attempt:1,}" Jan 17 12:34:46.342703 systemd[1]: run-netns-cni\x2d5d35af09\x2dcf7e\x2d9e3b\x2d9fb7\x2dca42b8476a4d.mount: Deactivated successfully. 
Jan 17 12:34:46.914054 systemd-networkd[1263]: calic5845672440: Link UP Jan 17 12:34:46.918314 systemd-networkd[1263]: calic5845672440: Gained carrier Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.479 [INFO][4536] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0 calico-kube-controllers-86cbc7c54b- calico-system 4980fedd-f173-462d-ae26-6b5775cf7947 804 0 2025-01-17 12:34:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86cbc7c54b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com calico-kube-controllers-86cbc7c54b-hswjc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic5845672440 [] []}} ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.480 [INFO][4536] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.673 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" HandleID="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.711 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" HandleID="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392e60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hkhka.gb1.brightbox.com", "pod":"calico-kube-controllers-86cbc7c54b-hswjc", "timestamp":"2025-01-17 12:34:46.673567947 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.711 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.719 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.719 [INFO][4590] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.731 [INFO][4590] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.759 [INFO][4590] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.776 [INFO][4590] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.782 [INFO][4590] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.793 [INFO][4590] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.794 [INFO][4590] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.797 [INFO][4590] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.810 [INFO][4590] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.828 [INFO][4590] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.195/26] block=192.168.13.192/26 handle="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.828 [INFO][4590] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.195/26] handle="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.829 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:34:46.982403 containerd[1621]: 2025-01-17 12:34:46.829 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.195/26] IPv6=[] ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" HandleID="k8s-pod-network.51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.854 [INFO][4536] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0", GenerateName:"calico-kube-controllers-86cbc7c54b-", Namespace:"calico-system", SelfLink:"", UID:"4980fedd-f173-462d-ae26-6b5775cf7947", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cbc7c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-86cbc7c54b-hswjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5845672440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.856 [INFO][4536] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.195/32] ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.856 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5845672440 ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.921 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 
12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.934 [INFO][4536] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0", GenerateName:"calico-kube-controllers-86cbc7c54b-", Namespace:"calico-system", SelfLink:"", UID:"4980fedd-f173-462d-ae26-6b5775cf7947", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cbc7c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f", Pod:"calico-kube-controllers-86cbc7c54b-hswjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5845672440", MAC:"82:64:c7:f2:20:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:46.988330 containerd[1621]: 2025-01-17 12:34:46.969 [INFO][4536] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f" Namespace="calico-system" Pod="calico-kube-controllers-86cbc7c54b-hswjc" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:47.108960 systemd-networkd[1263]: calif024c139bb6: Link UP Jan 17 12:34:47.109647 systemd-networkd[1263]: calif024c139bb6: Gained carrier Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.560 [INFO][4575] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0 coredns-76f75df574- kube-system 31f2088e-c721-48df-8cef-797f0799b017 805 0 2025-01-17 12:34:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com coredns-76f75df574-kfzfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif024c139bb6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-" Jan 17 12:34:47.175595 containerd[1621]: 
2025-01-17 12:34:46.561 [INFO][4575] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.849 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" HandleID="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.932 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" HandleID="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040d590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hkhka.gb1.brightbox.com", "pod":"coredns-76f75df574-kfzfn", "timestamp":"2025-01-17 12:34:46.849828597 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.934 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.934 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.934 [INFO][4599] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.952 [INFO][4599] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:46.972 [INFO][4599] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.005 [INFO][4599] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.010 [INFO][4599] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.020 [INFO][4599] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.021 [INFO][4599] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.027 [INFO][4599] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29 Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.042 [INFO][4599] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.056 [INFO][4599] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.196/26] block=192.168.13.192/26 handle="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.062 [INFO][4599] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.196/26] handle="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.062 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
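For orientation, a minimal Go sketch (not Calico's implementation) of the block assignment the [4599] IPAM entries above walk through: the host's affine block is 192.168.13.192/26, addresses already claimed on the node are skipped, and the next free address is handed out. The claimed set below is an assumption chosen so the toy allocator lands on 192.168.13.196, the address the log reports.

```go
package main

import (
	"fmt"
	"net"
)

// nextFreeInBlock scans a small IPv4 block (here a Calico-style /26) and
// returns the first address that is not yet claimed. A toy model of
// "Attempting to assign 1 addresses from block"; the real allocator also
// writes the block back to the datastore to claim the IP atomically.
func nextFreeInBlock(block *net.IPNet, claimed map[string]bool) (net.IP, bool) {
	cur := make(net.IP, 4)
	copy(cur, block.IP.Mask(block.Mask).To4())
	for block.Contains(cur) {
		if !claimed[cur.String()] {
			out := make(net.IP, 4)
			copy(out, cur)
			return out, true
		}
		for i := 3; i >= 0; i-- { // increment the IPv4 address by one
			cur[i]++
			if cur[i] != 0 {
				break
			}
		}
	}
	return nil, false // block exhausted
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.13.192/26")
	claimed := map[string]bool{ // assumed earlier assignments on this host
		"192.168.13.192": true,
		"192.168.13.193": true,
		"192.168.13.194": true,
		"192.168.13.195": true, // calico-kube-controllers-86cbc7c54b-hswjc (from the log)
	}
	if ip, ok := nextFreeInBlock(block, claimed); ok {
		fmt.Println("assigned", ip) // assigned 192.168.13.196
	}
}
```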
Jan 17 12:34:47.175595 containerd[1621]: 2025-01-17 12:34:47.062 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.196/26] IPv6=[] ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" HandleID="k8s-pod-network.669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.096 [INFO][4575] cni-plugin/k8s.go 386: Populated endpoint ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31f2088e-c721-48df-8cef-797f0799b017", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-kfzfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif024c139bb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.098 [INFO][4575] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.196/32] ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.103 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif024c139bb6 ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.110 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" 
WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.117 [INFO][4575] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31f2088e-c721-48df-8cef-797f0799b017", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29", Pod:"coredns-76f75df574-kfzfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif024c139bb6", MAC:"76:77:58:f7:82:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:47.181176 containerd[1621]: 2025-01-17 12:34:47.159 [INFO][4575] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29" Namespace="kube-system" Pod="coredns-76f75df574-kfzfn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:47.201539 containerd[1621]: time="2025-01-17T12:34:47.193450859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:47.201539 containerd[1621]: time="2025-01-17T12:34:47.193565855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:47.201539 containerd[1621]: time="2025-01-17T12:34:47.193614670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.201539 containerd[1621]: time="2025-01-17T12:34:47.193804660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.226205 systemd-networkd[1263]: vxlan.calico: Gained IPv6LL Jan 17 12:34:47.239045 systemd-networkd[1263]: cali231cd87e8e4: Link UP Jan 17 12:34:47.245094 systemd-networkd[1263]: cali231cd87e8e4: Gained carrier Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:46.573 [INFO][4565] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0 coredns-76f75df574- kube-system 9c1443c2-e0fb-459e-a578-bd981d23c7b4 803 0 2025-01-17 12:34:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com coredns-76f75df574-7qcnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali231cd87e8e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:46.573 [INFO][4565] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:46.923 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" HandleID="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:46.952 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" HandleID="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b9480), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hkhka.gb1.brightbox.com", "pod":"coredns-76f75df574-7qcnn", "timestamp":"2025-01-17 12:34:46.923902105 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:46.952 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.062 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.063 [INFO][4604] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.067 [INFO][4604] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.079 [INFO][4604] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.090 [INFO][4604] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.095 [INFO][4604] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.114 [INFO][4604] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.116 [INFO][4604] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.123 [INFO][4604] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.149 [INFO][4604] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.201 [INFO][4604] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.197/26] block=192.168.13.192/26 handle="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.201 [INFO][4604] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.197/26] handle="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.201 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
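The interleaving above also shows why the allocator logs a "host-wide IPAM lock": handler [4604] reports "About to acquire" at 12:34:46.952 but only "Acquired" at 12:34:47.062, the same instant [4599] releases it, so concurrent CNI ADDs on the node update the shared block one at a time. A minimal sketch of that serialization, with a toy allocator state standing in for the real block datastore (names and state are assumptions):

```go
package main

import (
	"fmt"
	"sync"
)

// hostIPAM is a toy stand-in for the per-node allocator: one mutex (the
// "host-wide IPAM lock") guards the shared block state so that concurrent
// CNI ADD calls hand out distinct addresses.
type hostIPAM struct {
	mu   sync.Mutex
	next int // next free host index inside the /26 (toy state, assumed)
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.13.%d", 192+h.next)
	h.next++
	return ip + " -> " + pod
}

func main() {
	ipam := &hostIPAM{next: 4} // .192-.195 assumed already in use
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-76f75df574-kfzfn", "coredns-76f75df574-7qcnn"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(ipam.assign(p)) // each pod gets its own address, never a duplicate
		}(pod)
	}
	wg.Wait()
}
```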
Jan 17 12:34:47.309550 containerd[1621]: 2025-01-17 12:34:47.209 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.197/26] IPv6=[] ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" HandleID="k8s-pod-network.86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.213 [INFO][4565] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9c1443c2-e0fb-459e-a578-bd981d23c7b4", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-7qcnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali231cd87e8e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.214 [INFO][4565] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.197/32] ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.214 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali231cd87e8e4 ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.247 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" 
WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.250 [INFO][4565] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9c1443c2-e0fb-459e-a578-bd981d23c7b4", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e", Pod:"coredns-76f75df574-7qcnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali231cd87e8e4", MAC:"12:2a:5e:bf:f8:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:47.311973 containerd[1621]: 2025-01-17 12:34:47.267 [INFO][4565] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e" Namespace="kube-system" Pod="coredns-76f75df574-7qcnn" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:47.438493 containerd[1621]: time="2025-01-17T12:34:47.426120352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:47.438493 containerd[1621]: time="2025-01-17T12:34:47.426216603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:47.438493 containerd[1621]: time="2025-01-17T12:34:47.426233969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.438493 containerd[1621]: time="2025-01-17T12:34:47.426371731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.479350 containerd[1621]: time="2025-01-17T12:34:47.478444482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:47.479350 containerd[1621]: time="2025-01-17T12:34:47.478537061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:47.479350 containerd[1621]: time="2025-01-17T12:34:47.478557419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.479350 containerd[1621]: time="2025-01-17T12:34:47.478707404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:47.569145 containerd[1621]: time="2025-01-17T12:34:47.569078387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cbc7c54b-hswjc,Uid:4980fedd-f173-462d-ae26-6b5775cf7947,Namespace:calico-system,Attempt:1,} returns sandbox id \"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f\"" Jan 17 12:34:47.604223 containerd[1621]: time="2025-01-17T12:34:47.602949280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7qcnn,Uid:9c1443c2-e0fb-459e-a578-bd981d23c7b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e\"" Jan 17 12:34:47.637572 containerd[1621]: time="2025-01-17T12:34:47.636566955Z" level=info msg="CreateContainer within sandbox \"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:34:47.698236 containerd[1621]: time="2025-01-17T12:34:47.698022757Z" level=info msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" Jan 17 12:34:47.705832 containerd[1621]: time="2025-01-17T12:34:47.705232559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kfzfn,Uid:31f2088e-c721-48df-8cef-797f0799b017,Namespace:kube-system,Attempt:1,} returns sandbox id \"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29\"" Jan 17 12:34:47.742226 containerd[1621]: time="2025-01-17T12:34:47.741797880Z" level=info msg="CreateContainer within sandbox \"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:34:47.745874 containerd[1621]: time="2025-01-17T12:34:47.745763348Z" level=info msg="CreateContainer within sandbox \"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d78a0ba3f3f7c1326643c0993e624343f0336db8f9fc9eb3a56dd031760a2392\"" Jan 17 12:34:47.748686 containerd[1621]: time="2025-01-17T12:34:47.748545678Z" level=info msg="StartContainer for \"d78a0ba3f3f7c1326643c0993e624343f0336db8f9fc9eb3a56dd031760a2392\"" Jan 17 12:34:47.779185 containerd[1621]: time="2025-01-17T12:34:47.778977074Z" level=info msg="CreateContainer within sandbox \"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6796e75660d1ef93b5a57db732ba206a46602fe67bd4d787ad973e13b4401725\"" Jan 17 12:34:47.782441 containerd[1621]: time="2025-01-17T12:34:47.782147684Z" level=info msg="StartContainer for 
\"6796e75660d1ef93b5a57db732ba206a46602fe67bd4d787ad973e13b4401725\"" Jan 17 12:34:47.956028 containerd[1621]: time="2025-01-17T12:34:47.954502291Z" level=info msg="StartContainer for \"6796e75660d1ef93b5a57db732ba206a46602fe67bd4d787ad973e13b4401725\" returns successfully" Jan 17 12:34:47.958425 containerd[1621]: time="2025-01-17T12:34:47.958319037Z" level=info msg="StartContainer for \"d78a0ba3f3f7c1326643c0993e624343f0336db8f9fc9eb3a56dd031760a2392\" returns successfully" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.889 [INFO][4814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.889 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" iface="eth0" netns="/var/run/netns/cni-332660cf-b718-f132-822f-415021608a08" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.890 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" iface="eth0" netns="/var/run/netns/cni-332660cf-b718-f132-822f-415021608a08" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.893 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" iface="eth0" netns="/var/run/netns/cni-332660cf-b718-f132-822f-415021608a08" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.893 [INFO][4814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:47.893 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.006 [INFO][4871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.007 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.007 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.021 [WARNING][4871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.021 [INFO][4871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.027 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:48.034122 containerd[1621]: 2025-01-17 12:34:48.031 [INFO][4814] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:48.036800 containerd[1621]: time="2025-01-17T12:34:48.034963696Z" level=info msg="TearDown network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" successfully" Jan 17 12:34:48.036800 containerd[1621]: time="2025-01-17T12:34:48.034999855Z" level=info msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" returns successfully" Jan 17 12:34:48.038099 containerd[1621]: time="2025-01-17T12:34:48.037355478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk8nd,Uid:d743c195-9b4b-4bf7-a2e2-343eb8b2964b,Namespace:calico-system,Attempt:1,}" Jan 17 12:34:48.053495 systemd-networkd[1263]: calic5845672440: Gained IPv6LL Jan 17 12:34:48.168428 systemd[1]: run-netns-cni\x2d332660cf\x2db718\x2df132\x2d822f\x2d415021608a08.mount: Deactivated successfully. 
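The teardown above succeeds even though the workload's veth was already gone and the IPAM handle no longer existed; the plugin treats "not found" as a no-op so StopPodSandbox stays idempotent. A rough sketch of that pattern, using a toy in-memory store and hypothetical names rather than the plugin's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// releaseIP deletes an allocation by handle; the handle may already be gone
// if an earlier teardown released it.
func releaseIP(store map[string]string, handleID string) error {
	if _, ok := store[handleID]; !ok {
		return errNotFound
	}
	delete(store, handleID)
	return nil
}

// teardown treats "already released" as success, mirroring the
// "Asked to release address but it doesn't exist. Ignoring" warning above.
func teardown(store map[string]string, handleID string) error {
	if err := releaseIP(store, handleID); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Println("address already released, ignoring")
			return nil
		}
		return err
	}
	return nil
}

func main() {
	store := map[string]string{} // empty: a previous teardown already cleaned up
	if err := teardown(store, "k8s-pod-network.example-sandbox"); err != nil {
		fmt.Println("teardown failed:", err)
		return
	}
	fmt.Println("StopPodSandbox returns successfully")
}
```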
Jan 17 12:34:48.208397 kubelet[2929]: I0117 12:34:48.208264 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kfzfn" podStartSLOduration=42.208212608 podStartE2EDuration="42.208212608s" podCreationTimestamp="2025-01-17 12:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:34:48.203108399 +0000 UTC m=+54.780959084" watchObservedRunningTime="2025-01-17 12:34:48.208212608 +0000 UTC m=+54.786063276" Jan 17 12:34:48.564916 systemd-networkd[1263]: cali6e91beb2c64: Link UP Jan 17 12:34:48.568060 systemd-networkd[1263]: cali6e91beb2c64: Gained carrier Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.153 [INFO][4899] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0 csi-node-driver- calico-system d743c195-9b4b-4bf7-a2e2-343eb8b2964b 826 0 2025-01-17 12:34:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-hkhka.gb1.brightbox.com csi-node-driver-xk8nd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e91beb2c64 [] []}} ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.153 [INFO][4899] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.409 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" HandleID="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.440 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" HandleID="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000608950), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hkhka.gb1.brightbox.com", "pod":"csi-node-driver-xk8nd", "timestamp":"2025-01-17 12:34:48.407925511 +0000 UTC"}, Hostname:"srv-hkhka.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.441 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
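The podStartSLOduration of roughly 42.2 s in the kubelet entry above is just the gap between the pod's creation timestamp (12:34:06) and the observed running time (12:34:48.208); no image-pull interval is subtracted because firstStartedPulling and lastFinishedPulling are the zero time. The same arithmetic in Go, using the values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches time.Time's default String() format, which is what the
	// kubelet log line prints.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-17 12:34:06 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-17 12:34:48.208212608 +0000 UTC")

	// With no image pull recorded, the SLO duration is simply
	// observedRunningTime - podCreationTimestamp.
	fmt.Println(running.Sub(created)) // 42.208212608s
}
```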
Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.441 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.441 [INFO][4914] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hkhka.gb1.brightbox.com' Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.448 [INFO][4914] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.475 [INFO][4914] ipam/ipam.go 372: Looking up existing affinities for host host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.493 [INFO][4914] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.497 [INFO][4914] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.505 [INFO][4914] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.505 [INFO][4914] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.508 [INFO][4914] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.518 [INFO][4914] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.533 [INFO][4914] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.198/26] block=192.168.13.192/26 handle="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.533 [INFO][4914] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.198/26] handle="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" host="srv-hkhka.gb1.brightbox.com" Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.534 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:34:48.614475 containerd[1621]: 2025-01-17 12:34:48.534 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.198/26] IPv6=[] ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" HandleID="k8s-pod-network.71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.617031 kubelet[2929]: I0117 12:34:48.614010 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7qcnn" podStartSLOduration=42.613854564 podStartE2EDuration="42.613854564s" podCreationTimestamp="2025-01-17 12:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:34:48.246149607 +0000 UTC m=+54.824000295" watchObservedRunningTime="2025-01-17 12:34:48.613854564 +0000 UTC m=+55.191705220" Jan 17 12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.540 [INFO][4899] cni-plugin/k8s.go 386: Populated endpoint ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d743c195-9b4b-4bf7-a2e2-343eb8b2964b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-xk8nd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e91beb2c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.542 [INFO][4899] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.198/32] ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.543 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e91beb2c64 ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 
12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.576 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.585 [INFO][4899] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d743c195-9b4b-4bf7-a2e2-343eb8b2964b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f", Pod:"csi-node-driver-xk8nd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e91beb2c64", MAC:"6e:cb:34:f8:88:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:48.617708 containerd[1621]: 2025-01-17 12:34:48.604 [INFO][4899] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f" Namespace="calico-system" Pod="csi-node-driver-xk8nd" WorkloadEndpoint="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:48.678178 containerd[1621]: time="2025-01-17T12:34:48.677994627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:34:48.679439 containerd[1621]: time="2025-01-17T12:34:48.679081887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:34:48.679757 containerd[1621]: time="2025-01-17T12:34:48.679661517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:48.682219 containerd[1621]: time="2025-01-17T12:34:48.682051834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:34:48.756507 containerd[1621]: time="2025-01-17T12:34:48.756447521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk8nd,Uid:d743c195-9b4b-4bf7-a2e2-343eb8b2964b,Namespace:calico-system,Attempt:1,} returns sandbox id \"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f\"" Jan 17 12:34:48.821647 systemd-networkd[1263]: calif024c139bb6: Gained IPv6LL Jan 17 12:34:49.013641 systemd-networkd[1263]: cali231cd87e8e4: Gained IPv6LL Jan 17 12:34:49.782964 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 17 12:34:49.787181 systemd-journald[1185]: Under memory pressure, flushing caches. Jan 17 12:34:49.783069 systemd-resolved[1514]: Flushed all caches. Jan 17 12:34:49.958901 containerd[1621]: time="2025-01-17T12:34:49.958820050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:49.960519 containerd[1621]: time="2025-01-17T12:34:49.960302018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:34:49.961571 containerd[1621]: time="2025-01-17T12:34:49.961482805Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:49.969629 containerd[1621]: time="2025-01-17T12:34:49.968942197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:49.973435 containerd[1621]: time="2025-01-17T12:34:49.973266489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.1346851s" Jan 17 12:34:49.973435 containerd[1621]: time="2025-01-17T12:34:49.973416297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:34:49.975824 containerd[1621]: time="2025-01-17T12:34:49.975525760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:34:49.981390 containerd[1621]: time="2025-01-17T12:34:49.981264339Z" level=info msg="CreateContainer within sandbox \"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:34:50.009257 containerd[1621]: time="2025-01-17T12:34:50.008969334Z" level=info msg="CreateContainer within sandbox \"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e629caf9a4022e72b578fe0b461056b10dfda4be4c8b2e21015af5238c95ac3\"" Jan 17 12:34:50.010764 containerd[1621]: time="2025-01-17T12:34:50.010581270Z" level=info msg="StartContainer for \"6e629caf9a4022e72b578fe0b461056b10dfda4be4c8b2e21015af5238c95ac3\"" Jan 17 12:34:50.037917 systemd-networkd[1263]: cali6e91beb2c64: Gained IPv6LL Jan 17 12:34:50.208637 containerd[1621]: time="2025-01-17T12:34:50.208555325Z" level=info 
msg="StartContainer for \"6e629caf9a4022e72b578fe0b461056b10dfda4be4c8b2e21015af5238c95ac3\" returns successfully" Jan 17 12:34:50.386860 containerd[1621]: time="2025-01-17T12:34:50.386590077Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:50.391304 containerd[1621]: time="2025-01-17T12:34:50.390800836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:34:50.410191 containerd[1621]: time="2025-01-17T12:34:50.410119794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 434.438208ms" Jan 17 12:34:50.410575 containerd[1621]: time="2025-01-17T12:34:50.410403574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:34:50.414674 containerd[1621]: time="2025-01-17T12:34:50.414442004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:34:50.416682 containerd[1621]: time="2025-01-17T12:34:50.416485179Z" level=info msg="CreateContainer within sandbox \"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:34:50.442695 containerd[1621]: time="2025-01-17T12:34:50.442546759Z" level=info msg="CreateContainer within sandbox \"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea77cf13b4107e939c80e105c65e1aeae614f19df26588ddb143374b548016e4\"" Jan 17 12:34:50.446449 containerd[1621]: time="2025-01-17T12:34:50.444271044Z" level=info msg="StartContainer for \"ea77cf13b4107e939c80e105c65e1aeae614f19df26588ddb143374b548016e4\"" Jan 17 12:34:50.620136 containerd[1621]: time="2025-01-17T12:34:50.618831947Z" level=info msg="StartContainer for \"ea77cf13b4107e939c80e105c65e1aeae614f19df26588ddb143374b548016e4\" returns successfully" Jan 17 12:34:51.293741 kubelet[2929]: I0117 12:34:51.293351 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7554c79d77-tz9dc" podStartSLOduration=30.809534129 podStartE2EDuration="36.293158884s" podCreationTimestamp="2025-01-17 12:34:15 +0000 UTC" firstStartedPulling="2025-01-17 12:34:44.928178122 +0000 UTC m=+51.506028777" lastFinishedPulling="2025-01-17 12:34:50.41180287 +0000 UTC m=+56.989653532" observedRunningTime="2025-01-17 12:34:51.24576188 +0000 UTC m=+57.823612550" watchObservedRunningTime="2025-01-17 12:34:51.293158884 +0000 UTC m=+57.871009565" Jan 17 12:34:51.296731 kubelet[2929]: I0117 12:34:51.294581 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7554c79d77-w2z5v" podStartSLOduration=31.151841138 podStartE2EDuration="36.294546177s" podCreationTimestamp="2025-01-17 12:34:15 +0000 UTC" firstStartedPulling="2025-01-17 12:34:44.831326814 +0000 UTC m=+51.409177476" lastFinishedPulling="2025-01-17 12:34:49.974031855 +0000 UTC m=+56.551882515" observedRunningTime="2025-01-17 12:34:51.293896381 +0000 UTC 
m=+57.871747050" watchObservedRunningTime="2025-01-17 12:34:51.294546177 +0000 UTC m=+57.872396845" Jan 17 12:34:52.248341 kubelet[2929]: I0117 12:34:52.246163 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:34:53.848831 containerd[1621]: time="2025-01-17T12:34:53.844026884Z" level=info msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.142 [WARNING][5097] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d743c195-9b4b-4bf7-a2e2-343eb8b2964b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f", Pod:"csi-node-driver-xk8nd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e91beb2c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.144 [INFO][5097] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.144 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" iface="eth0" netns="" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.144 [INFO][5097] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.144 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.469 [INFO][5103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.471 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.472 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.492 [WARNING][5103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.492 [INFO][5103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.495 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:54.502554 containerd[1621]: 2025-01-17 12:34:54.499 [INFO][5097] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.509707 containerd[1621]: time="2025-01-17T12:34:54.502556400Z" level=info msg="TearDown network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" successfully" Jan 17 12:34:54.509707 containerd[1621]: time="2025-01-17T12:34:54.502603429Z" level=info msg="StopPodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" returns successfully" Jan 17 12:34:54.509707 containerd[1621]: time="2025-01-17T12:34:54.504091103Z" level=info msg="RemovePodSandbox for \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" Jan 17 12:34:54.509707 containerd[1621]: time="2025-01-17T12:34:54.508910932Z" level=info msg="Forcibly stopping sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\"" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.650 [WARNING][5123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d743c195-9b4b-4bf7-a2e2-343eb8b2964b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f", Pod:"csi-node-driver-xk8nd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e91beb2c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.658 [INFO][5123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.658 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" iface="eth0" netns="" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.658 [INFO][5123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.658 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.868 [INFO][5129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.879 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.879 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.896 [WARNING][5129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.896 [INFO][5129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" HandleID="k8s-pod-network.003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Workload="srv--hkhka.gb1.brightbox.com-k8s-csi--node--driver--xk8nd-eth0" Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.908 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:54.928271 containerd[1621]: 2025-01-17 12:34:54.918 [INFO][5123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d" Jan 17 12:34:54.928271 containerd[1621]: time="2025-01-17T12:34:54.927606634Z" level=info msg="TearDown network for sandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" successfully" Jan 17 12:34:55.014327 containerd[1621]: time="2025-01-17T12:34:55.014247350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:55.016789 containerd[1621]: time="2025-01-17T12:34:55.016659324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:34:55.020303 containerd[1621]: time="2025-01-17T12:34:55.019266981Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:55.043345 containerd[1621]: time="2025-01-17T12:34:55.041101011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:55.044824 containerd[1621]: time="2025-01-17T12:34:55.044563970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:34:55.047642 containerd[1621]: time="2025-01-17T12:34:55.047483968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.632997268s" Jan 17 12:34:55.047642 containerd[1621]: time="2025-01-17T12:34:55.047527868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:34:55.053605 containerd[1621]: time="2025-01-17T12:34:55.053244699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:34:55.063326 containerd[1621]: time="2025-01-17T12:34:55.062963766Z" level=info msg="RemovePodSandbox \"003c7185aeda3e4d9fc7d61840e5caadc82b21c27e3dc0e0f070179b96b2903d\" returns successfully" Jan 17 12:34:55.064302 containerd[1621]: time="2025-01-17T12:34:55.063667395Z" level=info msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" Jan 17 12:34:55.101333 containerd[1621]: time="2025-01-17T12:34:55.101006581Z" level=info msg="CreateContainer within sandbox \"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:34:55.124318 containerd[1621]: time="2025-01-17T12:34:55.124009772Z" level=info msg="CreateContainer within sandbox \"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b34c9bcb09deb1b22edc497e17f7e1c059c9d2e7f958270a9a03b3d1caa6e237\"" Jan 17 12:34:55.128881 containerd[1621]: time="2025-01-17T12:34:55.126420915Z" level=info msg="StartContainer for \"b34c9bcb09deb1b22edc497e17f7e1c059c9d2e7f958270a9a03b3d1caa6e237\"" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.214 [WARNING][5149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"53904d6f-b51c-4d8b-962c-f67a32e4ae5e", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda", Pod:"calico-apiserver-7554c79d77-w2z5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif875724c2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.215 [INFO][5149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.215 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" iface="eth0" netns="" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.215 [INFO][5149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.215 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.314 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.314 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.314 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.331 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.331 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.334 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:55.339581 containerd[1621]: 2025-01-17 12:34:55.337 [INFO][5149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.346768 containerd[1621]: time="2025-01-17T12:34:55.339645976Z" level=info msg="TearDown network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" successfully" Jan 17 12:34:55.346768 containerd[1621]: time="2025-01-17T12:34:55.339702701Z" level=info msg="StopPodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" returns successfully" Jan 17 12:34:55.346768 containerd[1621]: time="2025-01-17T12:34:55.342719795Z" level=info msg="RemovePodSandbox for \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" Jan 17 12:34:55.346768 containerd[1621]: time="2025-01-17T12:34:55.342789002Z" level=info msg="Forcibly stopping sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\"" Jan 17 12:34:55.459726 containerd[1621]: time="2025-01-17T12:34:55.459673068Z" level=info msg="StartContainer for \"b34c9bcb09deb1b22edc497e17f7e1c059c9d2e7f958270a9a03b3d1caa6e237\" returns successfully" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.519 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"53904d6f-b51c-4d8b-962c-f67a32e4ae5e", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"78a7715c4ddbbe3af0ed9757bdc0b6fe598e68f675fc6e6a30b09accfb8b7eda", Pod:"calico-apiserver-7554c79d77-w2z5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif875724c2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.519 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.519 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" iface="eth0" netns="" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.519 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.519 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.572 [INFO][5216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.572 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.572 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.582 [WARNING][5216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.582 [INFO][5216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" HandleID="k8s-pod-network.b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--w2z5v-eth0" Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.585 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:55.592585 containerd[1621]: 2025-01-17 12:34:55.589 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b" Jan 17 12:34:55.592585 containerd[1621]: time="2025-01-17T12:34:55.591675593Z" level=info msg="TearDown network for sandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" successfully" Jan 17 12:34:55.598875 containerd[1621]: time="2025-01-17T12:34:55.598276863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:34:55.598875 containerd[1621]: time="2025-01-17T12:34:55.598551680Z" level=info msg="RemovePodSandbox \"b3a3c1bca2cb6976b9a3183ec0fe961be3ad666e3126afaa6aabdf2bc32bb70b\" returns successfully" Jan 17 12:34:55.601732 containerd[1621]: time="2025-01-17T12:34:55.601689760Z" level=info msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.701 [WARNING][5234] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0", GenerateName:"calico-kube-controllers-86cbc7c54b-", Namespace:"calico-system", SelfLink:"", UID:"4980fedd-f173-462d-ae26-6b5775cf7947", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cbc7c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f", Pod:"calico-kube-controllers-86cbc7c54b-hswjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5845672440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.702 [INFO][5234] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.702 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" iface="eth0" netns="" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.702 [INFO][5234] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.702 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.746 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.747 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.747 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.755 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.755 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.757 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:55.761150 containerd[1621]: 2025-01-17 12:34:55.759 [INFO][5234] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.763666 containerd[1621]: time="2025-01-17T12:34:55.761222926Z" level=info msg="TearDown network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" successfully" Jan 17 12:34:55.763666 containerd[1621]: time="2025-01-17T12:34:55.761259597Z" level=info msg="StopPodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" returns successfully" Jan 17 12:34:55.763666 containerd[1621]: time="2025-01-17T12:34:55.762389214Z" level=info msg="RemovePodSandbox for \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" Jan 17 12:34:55.763666 containerd[1621]: time="2025-01-17T12:34:55.762455274Z" level=info msg="Forcibly stopping sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\"" Jan 17 12:34:55.865421 systemd-journald[1185]: Under memory pressure, flushing caches. Jan 17 12:34:55.862796 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 17 12:34:55.862833 systemd-resolved[1514]: Flushed all caches. Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.823 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0", GenerateName:"calico-kube-controllers-86cbc7c54b-", Namespace:"calico-system", SelfLink:"", UID:"4980fedd-f173-462d-ae26-6b5775cf7947", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cbc7c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"51239015bd3137a490e4a4118f81fc8ae0105f7dd34537c795f45c62490eaf9f", Pod:"calico-kube-controllers-86cbc7c54b-hswjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5845672440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.824 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.824 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" iface="eth0" netns="" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.824 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.824 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.862 [INFO][5265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.862 [INFO][5265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.862 [INFO][5265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.876 [WARNING][5265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.876 [INFO][5265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" HandleID="k8s-pod-network.24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--kube--controllers--86cbc7c54b--hswjc-eth0" Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.878 [INFO][5265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:55.882926 containerd[1621]: 2025-01-17 12:34:55.881 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e" Jan 17 12:34:55.883781 containerd[1621]: time="2025-01-17T12:34:55.882996056Z" level=info msg="TearDown network for sandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" successfully" Jan 17 12:34:55.900572 containerd[1621]: time="2025-01-17T12:34:55.899639899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:34:55.900572 containerd[1621]: time="2025-01-17T12:34:55.899720443Z" level=info msg="RemovePodSandbox \"24cf4ae1a81c469a35cb5350d6b21ba0b619bb2f317bdbc9c67a83542be96a8e\" returns successfully" Jan 17 12:34:55.902219 containerd[1621]: time="2025-01-17T12:34:55.902161671Z" level=info msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:55.968 [WARNING][5283] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9c1443c2-e0fb-459e-a578-bd981d23c7b4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e", Pod:"coredns-76f75df574-7qcnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali231cd87e8e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:55.969 [INFO][5283] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:55.969 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" iface="eth0" netns="" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:55.969 [INFO][5283] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:55.969 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.002 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.002 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.002 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.011 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.011 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.014 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:56.018605 containerd[1621]: 2025-01-17 12:34:56.016 [INFO][5283] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.020433 containerd[1621]: time="2025-01-17T12:34:56.018675739Z" level=info msg="TearDown network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" successfully" Jan 17 12:34:56.020433 containerd[1621]: time="2025-01-17T12:34:56.018731632Z" level=info msg="StopPodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" returns successfully" Jan 17 12:34:56.020433 containerd[1621]: time="2025-01-17T12:34:56.020339457Z" level=info msg="RemovePodSandbox for \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" Jan 17 12:34:56.020433 containerd[1621]: time="2025-01-17T12:34:56.020405625Z" level=info msg="Forcibly stopping sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\"" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.088 [WARNING][5308] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9c1443c2-e0fb-459e-a578-bd981d23c7b4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"86bdfe0e3fc84bf27fb5ab36e02bfad60c8d8ae795260cdf1f750d9c0ea9585e", Pod:"coredns-76f75df574-7qcnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali231cd87e8e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.088 [INFO][5308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.088 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" iface="eth0" netns="" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.088 [INFO][5308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.089 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.124 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.124 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.124 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.134 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.134 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" HandleID="k8s-pod-network.a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--7qcnn-eth0" Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.136 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:56.142219 containerd[1621]: 2025-01-17 12:34:56.138 [INFO][5308] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023" Jan 17 12:34:56.144804 containerd[1621]: time="2025-01-17T12:34:56.142422626Z" level=info msg="TearDown network for sandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" successfully" Jan 17 12:34:56.149347 containerd[1621]: time="2025-01-17T12:34:56.149307956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:34:56.150130 containerd[1621]: time="2025-01-17T12:34:56.149383267Z" level=info msg="RemovePodSandbox \"a90172a8e543b10b82d5a018f71ac476ac97f7be8cb0cfb96b3ae978b0006023\" returns successfully" Jan 17 12:34:56.150274 containerd[1621]: time="2025-01-17T12:34:56.150230962Z" level=info msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.221 [WARNING][5333] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9", Pod:"calico-apiserver-7554c79d77-tz9dc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid70ec7e51f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.222 [INFO][5333] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.222 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" iface="eth0" netns="" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.222 [INFO][5333] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.222 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.264 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.264 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.264 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.275 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.275 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.277 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:56.281237 containerd[1621]: 2025-01-17 12:34:56.279 [INFO][5333] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.283455 containerd[1621]: time="2025-01-17T12:34:56.281252794Z" level=info msg="TearDown network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" successfully" Jan 17 12:34:56.283455 containerd[1621]: time="2025-01-17T12:34:56.281315608Z" level=info msg="StopPodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" returns successfully" Jan 17 12:34:56.283455 containerd[1621]: time="2025-01-17T12:34:56.282093619Z" level=info msg="RemovePodSandbox for \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" Jan 17 12:34:56.283455 containerd[1621]: time="2025-01-17T12:34:56.282133583Z" level=info msg="Forcibly stopping sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\"" Jan 17 12:34:56.349018 kubelet[2929]: I0117 12:34:56.348963 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86cbc7c54b-hswjc" podStartSLOduration=32.872409788 podStartE2EDuration="40.348768115s" podCreationTimestamp="2025-01-17 12:34:16 +0000 UTC" firstStartedPulling="2025-01-17 12:34:47.573378418 +0000 UTC m=+54.151229078" lastFinishedPulling="2025-01-17 12:34:55.049736742 +0000 UTC m=+61.627587405" observedRunningTime="2025-01-17 12:34:56.345181387 +0000 UTC m=+62.923032071" watchObservedRunningTime="2025-01-17 12:34:56.348768115 +0000 UTC m=+62.926618784" Jan 17 12:34:56.436383 systemd[1]: run-containerd-runc-k8s.io-b34c9bcb09deb1b22edc497e17f7e1c059c9d2e7f958270a9a03b3d1caa6e237-runc.2IwvL1.mount: Deactivated successfully. Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.444 [WARNING][5363] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0", GenerateName:"calico-apiserver-7554c79d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a572b9d7-0e0e-4437-9a7e-b8c38fbcafe7", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7554c79d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"b59e96e732a61e6c9525ec87f04f060f0c908a2d6b485c6cfdf036ed3cd704c9", Pod:"calico-apiserver-7554c79d77-tz9dc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid70ec7e51f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.446 [INFO][5363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.446 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" iface="eth0" netns="" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.446 [INFO][5363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.447 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.565 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.565 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.565 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.574 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.574 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" HandleID="k8s-pod-network.511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Workload="srv--hkhka.gb1.brightbox.com-k8s-calico--apiserver--7554c79d77--tz9dc-eth0" Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.577 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:56.583343 containerd[1621]: 2025-01-17 12:34:56.579 [INFO][5363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa" Jan 17 12:34:56.583343 containerd[1621]: time="2025-01-17T12:34:56.582553431Z" level=info msg="TearDown network for sandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" successfully" Jan 17 12:34:56.588728 containerd[1621]: time="2025-01-17T12:34:56.588407765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:34:56.588728 containerd[1621]: time="2025-01-17T12:34:56.588550308Z" level=info msg="RemovePodSandbox \"511c9802a42aa31798bc51d1e0f6a9323d522ee40ebe8ad16a32d96fc3b2b7aa\" returns successfully" Jan 17 12:34:56.591424 containerd[1621]: time="2025-01-17T12:34:56.590808933Z" level=info msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.681 [WARNING][5407] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31f2088e-c721-48df-8cef-797f0799b017", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29", Pod:"coredns-76f75df574-kfzfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif024c139bb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.681 [INFO][5407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.681 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" iface="eth0" netns="" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.681 [INFO][5407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.681 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.783 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.783 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.783 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.815 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.815 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.818 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:56.825433 containerd[1621]: 2025-01-17 12:34:56.822 [INFO][5407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:56.827773 containerd[1621]: time="2025-01-17T12:34:56.825507179Z" level=info msg="TearDown network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" successfully" Jan 17 12:34:56.827773 containerd[1621]: time="2025-01-17T12:34:56.825554666Z" level=info msg="StopPodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" returns successfully" Jan 17 12:34:56.828558 containerd[1621]: time="2025-01-17T12:34:56.828316515Z" level=info msg="RemovePodSandbox for \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" Jan 17 12:34:56.828558 containerd[1621]: time="2025-01-17T12:34:56.828355611Z" level=info msg="Forcibly stopping sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\"" Jan 17 12:34:57.007740 containerd[1621]: time="2025-01-17T12:34:57.007554765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:57.012588 containerd[1621]: time="2025-01-17T12:34:57.012530383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:34:57.020299 containerd[1621]: time="2025-01-17T12:34:57.018329082Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:57.023986 containerd[1621]: time="2025-01-17T12:34:57.023951194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:57.025135 containerd[1621]: time="2025-01-17T12:34:57.025091689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.971782594s" Jan 17 12:34:57.025317 containerd[1621]: time="2025-01-17T12:34:57.025268009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 
17 12:34:57.029682 containerd[1621]: time="2025-01-17T12:34:57.029639138Z" level=info msg="CreateContainer within sandbox \"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:34:57.055723 containerd[1621]: time="2025-01-17T12:34:57.055614287Z" level=info msg="CreateContainer within sandbox \"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9ac3eba1e4f73781a86298962737ba656d411bf7c13ebb82879059ca1e4521cb\"" Jan 17 12:34:57.059091 containerd[1621]: time="2025-01-17T12:34:57.058780507Z" level=info msg="StartContainer for \"9ac3eba1e4f73781a86298962737ba656d411bf7c13ebb82879059ca1e4521cb\"" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:56.946 [WARNING][5435] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31f2088e-c721-48df-8cef-797f0799b017", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hkhka.gb1.brightbox.com", ContainerID:"669c12ced0b90697373b853230e93c26e61c7271fdc11b891fd0e437a3060b29", Pod:"coredns-76f75df574-kfzfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif024c139bb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:56.947 [INFO][5435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:56.947 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" iface="eth0" netns="" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:56.947 [INFO][5435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:56.947 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.033 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.033 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.033 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.065 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.066 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" HandleID="k8s-pod-network.0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Workload="srv--hkhka.gb1.brightbox.com-k8s-coredns--76f75df574--kfzfn-eth0" Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.070 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:34:57.074523 containerd[1621]: 2025-01-17 12:34:57.072 [INFO][5435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50" Jan 17 12:34:57.076353 containerd[1621]: time="2025-01-17T12:34:57.075860863Z" level=info msg="TearDown network for sandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" successfully" Jan 17 12:34:57.083715 containerd[1621]: time="2025-01-17T12:34:57.083654539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:34:57.083938 containerd[1621]: time="2025-01-17T12:34:57.083822127Z" level=info msg="RemovePodSandbox \"0a96bc3440679916949b21c093304e0f4eabaa35125dccf59001820b6fca8b50\" returns successfully" Jan 17 12:34:57.197926 containerd[1621]: time="2025-01-17T12:34:57.197515110Z" level=info msg="StartContainer for \"9ac3eba1e4f73781a86298962737ba656d411bf7c13ebb82879059ca1e4521cb\" returns successfully" Jan 17 12:34:57.202347 containerd[1621]: time="2025-01-17T12:34:57.202314162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:34:58.984466 containerd[1621]: time="2025-01-17T12:34:58.984391618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:58.985341 containerd[1621]: time="2025-01-17T12:34:58.985224564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:34:58.988541 containerd[1621]: time="2025-01-17T12:34:58.988295933Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:59.007492 containerd[1621]: time="2025-01-17T12:34:59.007428396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:34:59.012084 containerd[1621]: time="2025-01-17T12:34:59.012032518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.809676344s" Jan 17 12:34:59.013312 containerd[1621]: time="2025-01-17T12:34:59.012272041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:34:59.025811 containerd[1621]: time="2025-01-17T12:34:59.025772883Z" level=info msg="CreateContainer within sandbox \"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:34:59.057688 containerd[1621]: time="2025-01-17T12:34:59.056950523Z" level=info msg="CreateContainer within sandbox \"71c3a32a23e75eb726a6f5eb6731a63d0395d3be95391409f77cb27e5f0ec14f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e661b44ffe2f42b7f89176273e4dee80cb72133522bea576c2ebc5226aa47c3a\"" Jan 17 12:34:59.057882 containerd[1621]: time="2025-01-17T12:34:59.057839723Z" level=info msg="StartContainer for \"e661b44ffe2f42b7f89176273e4dee80cb72133522bea576c2ebc5226aa47c3a\"" Jan 17 12:34:59.219540 containerd[1621]: time="2025-01-17T12:34:59.219484556Z" level=info msg="StartContainer for \"e661b44ffe2f42b7f89176273e4dee80cb72133522bea576c2ebc5226aa47c3a\" returns successfully" Jan 17 12:35:00.009038 kubelet[2929]: I0117 12:35:00.008921 2929 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:35:00.016419 kubelet[2929]: I0117 12:35:00.016395 2929 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:35:08.353548 kubelet[2929]: I0117 12:35:08.352695 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:35:08.387366 kubelet[2929]: I0117 12:35:08.386704 2929 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xk8nd" podStartSLOduration=42.12991609 podStartE2EDuration="52.386624047s" podCreationTimestamp="2025-01-17 12:34:16 +0000 UTC" firstStartedPulling="2025-01-17 12:34:48.759676248 +0000 UTC m=+55.337526905" lastFinishedPulling="2025-01-17 12:34:59.016384207 +0000 UTC m=+65.594234862" observedRunningTime="2025-01-17 12:34:59.366390193 +0000 UTC m=+65.944240867" watchObservedRunningTime="2025-01-17 12:35:08.386624047 +0000 UTC m=+74.964474715" Jan 17 12:35:23.415716 systemd[1]: Started sshd@9-10.230.31.94:22-139.178.68.195:44220.service - OpenSSH per-connection server daemon (139.178.68.195:44220). Jan 17 12:35:24.506589 sshd[5582]: Accepted publickey for core from 139.178.68.195 port 44220 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:24.511353 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:24.552368 systemd-logind[1600]: New session 12 of user core. Jan 17 12:35:24.558758 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:35:25.752959 sshd[5582]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:25.760929 systemd[1]: sshd@9-10.230.31.94:22-139.178.68.195:44220.service: Deactivated successfully. Jan 17 12:35:25.768124 systemd-logind[1600]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:35:25.768690 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:35:25.774895 systemd-logind[1600]: Removed session 12. Jan 17 12:35:30.915020 systemd[1]: Started sshd@10-10.230.31.94:22-139.178.68.195:54012.service - OpenSSH per-connection server daemon (139.178.68.195:54012). Jan 17 12:35:31.827931 sshd[5605]: Accepted publickey for core from 139.178.68.195 port 54012 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:31.830488 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:31.841275 systemd-logind[1600]: New session 13 of user core. Jan 17 12:35:31.845763 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:35:32.585975 sshd[5605]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:32.591770 systemd[1]: sshd@10-10.230.31.94:22-139.178.68.195:54012.service: Deactivated successfully. Jan 17 12:35:32.597299 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:35:32.599309 systemd-logind[1600]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:35:32.601627 systemd-logind[1600]: Removed session 13. Jan 17 12:35:37.738632 systemd[1]: Started sshd@11-10.230.31.94:22-139.178.68.195:53312.service - OpenSSH per-connection server daemon (139.178.68.195:53312). 
Jan 17 12:35:38.635758 sshd[5641]: Accepted publickey for core from 139.178.68.195 port 53312 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:38.638174 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:38.646546 systemd-logind[1600]: New session 14 of user core. Jan 17 12:35:38.656155 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:35:39.353744 sshd[5641]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:39.359189 systemd[1]: sshd@11-10.230.31.94:22-139.178.68.195:53312.service: Deactivated successfully. Jan 17 12:35:39.359426 systemd-logind[1600]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:35:39.363261 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:35:39.365012 systemd-logind[1600]: Removed session 14. Jan 17 12:35:39.504611 systemd[1]: Started sshd@12-10.230.31.94:22-139.178.68.195:53320.service - OpenSSH per-connection server daemon (139.178.68.195:53320). Jan 17 12:35:40.419966 sshd[5656]: Accepted publickey for core from 139.178.68.195 port 53320 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:40.422809 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:40.431813 systemd-logind[1600]: New session 15 of user core. Jan 17 12:35:40.442802 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:35:41.288542 sshd[5656]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:41.294373 systemd[1]: sshd@12-10.230.31.94:22-139.178.68.195:53320.service: Deactivated successfully. Jan 17 12:35:41.299618 systemd-logind[1600]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:35:41.300023 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:35:41.304038 systemd-logind[1600]: Removed session 15. Jan 17 12:35:41.437686 systemd[1]: Started sshd@13-10.230.31.94:22-139.178.68.195:53328.service - OpenSSH per-connection server daemon (139.178.68.195:53328). Jan 17 12:35:42.347337 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 53328 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:42.350821 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:42.358924 systemd-logind[1600]: New session 16 of user core. Jan 17 12:35:42.365683 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:35:43.056169 sshd[5668]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:43.060831 systemd-logind[1600]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:35:43.061221 systemd[1]: sshd@13-10.230.31.94:22-139.178.68.195:53328.service: Deactivated successfully. Jan 17 12:35:43.065892 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:35:43.067777 systemd-logind[1600]: Removed session 16. Jan 17 12:35:48.207037 systemd[1]: Started sshd@14-10.230.31.94:22-139.178.68.195:33918.service - OpenSSH per-connection server daemon (139.178.68.195:33918). Jan 17 12:35:49.139823 sshd[5703]: Accepted publickey for core from 139.178.68.195 port 33918 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:49.143645 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:49.152669 systemd-logind[1600]: New session 17 of user core. Jan 17 12:35:49.158974 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 12:35:49.882530 sshd[5703]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:49.889733 systemd[1]: sshd@14-10.230.31.94:22-139.178.68.195:33918.service: Deactivated successfully. Jan 17 12:35:49.894943 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:35:49.896622 systemd-logind[1600]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:35:49.898039 systemd-logind[1600]: Removed session 17. Jan 17 12:35:55.097767 systemd[1]: Started sshd@15-10.230.31.94:22-139.178.68.195:59354.service - OpenSSH per-connection server daemon (139.178.68.195:59354). Jan 17 12:35:56.026222 sshd[5726]: Accepted publickey for core from 139.178.68.195 port 59354 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:35:56.033274 sshd[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:56.045463 systemd-logind[1600]: New session 18 of user core. Jan 17 12:35:56.051708 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:35:56.808845 sshd[5726]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:56.815913 systemd[1]: sshd@15-10.230.31.94:22-139.178.68.195:59354.service: Deactivated successfully. Jan 17 12:35:56.821092 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:35:56.822662 systemd-logind[1600]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:35:56.824497 systemd-logind[1600]: Removed session 18. Jan 17 12:36:01.969232 systemd[1]: Started sshd@16-10.230.31.94:22-139.178.68.195:59358.service - OpenSSH per-connection server daemon (139.178.68.195:59358). Jan 17 12:36:02.897586 sshd[5779]: Accepted publickey for core from 139.178.68.195 port 59358 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:02.900799 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:02.910436 systemd-logind[1600]: New session 19 of user core. Jan 17 12:36:02.916079 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:36:03.628496 sshd[5779]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:03.632983 systemd[1]: sshd@16-10.230.31.94:22-139.178.68.195:59358.service: Deactivated successfully. Jan 17 12:36:03.638131 systemd-logind[1600]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:36:03.639102 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:36:03.642583 systemd-logind[1600]: Removed session 19. Jan 17 12:36:08.780692 systemd[1]: Started sshd@17-10.230.31.94:22-139.178.68.195:46506.service - OpenSSH per-connection server daemon (139.178.68.195:46506). Jan 17 12:36:09.658047 sshd[5804]: Accepted publickey for core from 139.178.68.195 port 46506 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:09.660437 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:09.668679 systemd-logind[1600]: New session 20 of user core. Jan 17 12:36:09.677931 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:36:10.390651 sshd[5804]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:10.395427 systemd[1]: sshd@17-10.230.31.94:22-139.178.68.195:46506.service: Deactivated successfully. Jan 17 12:36:10.396469 systemd-logind[1600]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:36:10.401552 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:36:10.403511 systemd-logind[1600]: Removed session 20. 
Jan 17 12:36:10.541609 systemd[1]: Started sshd@18-10.230.31.94:22-139.178.68.195:46510.service - OpenSSH per-connection server daemon (139.178.68.195:46510). Jan 17 12:36:11.432081 sshd[5818]: Accepted publickey for core from 139.178.68.195 port 46510 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:11.434652 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:11.442738 systemd-logind[1600]: New session 21 of user core. Jan 17 12:36:11.450764 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:36:12.441611 sshd[5818]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:12.452340 systemd[1]: sshd@18-10.230.31.94:22-139.178.68.195:46510.service: Deactivated successfully. Jan 17 12:36:12.456677 systemd-logind[1600]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:36:12.457090 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:36:12.462661 systemd-logind[1600]: Removed session 21. Jan 17 12:36:12.592653 systemd[1]: Started sshd@19-10.230.31.94:22-139.178.68.195:46526.service - OpenSSH per-connection server daemon (139.178.68.195:46526). Jan 17 12:36:13.487044 sshd[5830]: Accepted publickey for core from 139.178.68.195 port 46526 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:13.489855 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:13.497424 systemd-logind[1600]: New session 22 of user core. Jan 17 12:36:13.504777 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:36:16.608004 systemd[1]: run-containerd-runc-k8s.io-33ad3950751d79c174150f50367b692701968be0b5ddd0e672652c8e6645d84b-runc.nWCdny.mount: Deactivated successfully. Jan 17 12:36:17.173494 sshd[5830]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:17.185822 systemd[1]: sshd@19-10.230.31.94:22-139.178.68.195:46526.service: Deactivated successfully. Jan 17 12:36:17.191476 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:36:17.191710 systemd-logind[1600]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:36:17.195809 systemd-logind[1600]: Removed session 22. Jan 17 12:36:17.329628 systemd[1]: Started sshd@20-10.230.31.94:22-139.178.68.195:33784.service - OpenSSH per-connection server daemon (139.178.68.195:33784). Jan 17 12:36:17.845536 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 17 12:36:17.850435 systemd-journald[1185]: Under memory pressure, flushing caches. Jan 17 12:36:17.847089 systemd-resolved[1514]: Flushed all caches. Jan 17 12:36:18.216213 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 33784 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:18.218822 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:18.226776 systemd-logind[1600]: New session 23 of user core. Jan 17 12:36:18.232736 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:36:19.489130 sshd[5878]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:19.497062 systemd[1]: sshd@20-10.230.31.94:22-139.178.68.195:33784.service: Deactivated successfully. Jan 17 12:36:19.501598 systemd-logind[1600]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:36:19.502413 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:36:19.505818 systemd-logind[1600]: Removed session 23. 
Jan 17 12:36:19.642731 systemd[1]: Started sshd@21-10.230.31.94:22-139.178.68.195:33792.service - OpenSSH per-connection server daemon (139.178.68.195:33792). Jan 17 12:36:20.984645 sshd[5901]: Accepted publickey for core from 139.178.68.195 port 33792 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:20.987562 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:20.995760 systemd-logind[1600]: New session 24 of user core. Jan 17 12:36:21.007834 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:36:21.705926 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:21.711122 systemd[1]: sshd@21-10.230.31.94:22-139.178.68.195:33792.service: Deactivated successfully. Jan 17 12:36:21.712666 systemd-logind[1600]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:36:21.716611 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:36:21.718858 systemd-logind[1600]: Removed session 24. Jan 17 12:36:26.856623 systemd[1]: Started sshd@22-10.230.31.94:22-139.178.68.195:39614.service - OpenSSH per-connection server daemon (139.178.68.195:39614). Jan 17 12:36:27.741361 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 39614 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:27.744199 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:27.751735 systemd-logind[1600]: New session 25 of user core. Jan 17 12:36:27.756687 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:36:28.540410 sshd[5918]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:28.546905 systemd-logind[1600]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:36:28.548039 systemd[1]: sshd@22-10.230.31.94:22-139.178.68.195:39614.service: Deactivated successfully. Jan 17 12:36:28.553244 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:36:28.555522 systemd-logind[1600]: Removed session 25. Jan 17 12:36:33.694680 systemd[1]: Started sshd@23-10.230.31.94:22-139.178.68.195:39616.service - OpenSSH per-connection server daemon (139.178.68.195:39616). Jan 17 12:36:34.605262 sshd[5950]: Accepted publickey for core from 139.178.68.195 port 39616 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:34.608414 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:34.619547 systemd-logind[1600]: New session 26 of user core. Jan 17 12:36:34.626748 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:36:35.358785 sshd[5950]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:35.363426 systemd[1]: sshd@23-10.230.31.94:22-139.178.68.195:39616.service: Deactivated successfully. Jan 17 12:36:35.370043 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:36:35.371903 systemd-logind[1600]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:36:35.373635 systemd-logind[1600]: Removed session 26. Jan 17 12:36:40.512449 systemd[1]: Started sshd@24-10.230.31.94:22-139.178.68.195:35444.service - OpenSSH per-connection server daemon (139.178.68.195:35444). 
Jan 17 12:36:41.406866 sshd[5967]: Accepted publickey for core from 139.178.68.195 port 35444 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 12:36:41.409216 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:36:41.417080 systemd-logind[1600]: New session 27 of user core. Jan 17 12:36:41.427846 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:36:42.138805 sshd[5967]: pam_unix(sshd:session): session closed for user core Jan 17 12:36:42.145237 systemd[1]: sshd@24-10.230.31.94:22-139.178.68.195:35444.service: Deactivated successfully. Jan 17 12:36:42.148968 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:36:42.149825 systemd-logind[1600]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:36:42.152693 systemd-logind[1600]: Removed session 27.