Jan 15 13:22:23.082522 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 15 13:22:23.082559 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 13:22:23.082574 kernel: BIOS-provided physical RAM map: Jan 15 13:22:23.082597 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 15 13:22:23.082607 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 15 13:22:23.082617 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 15 13:22:23.082628 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 15 13:22:23.082639 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 15 13:22:23.082649 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 15 13:22:23.082659 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 15 13:22:23.082670 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 13:22:23.082680 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 15 13:22:23.082705 kernel: NX (Execute Disable) protection: active Jan 15 13:22:23.082717 kernel: APIC: Static calls initialized Jan 15 13:22:23.082729 kernel: SMBIOS 2.8 present. Jan 15 13:22:23.082744 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 15 13:22:23.082766 kernel: Hypervisor detected: KVM Jan 15 13:22:23.082784 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 13:22:23.082795 kernel: kvm-clock: using sched offset of 5056179840 cycles Jan 15 13:22:23.082808 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 13:22:23.082819 kernel: tsc: Detected 2499.998 MHz processor Jan 15 13:22:23.082831 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 13:22:23.082843 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 13:22:23.082854 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 15 13:22:23.082866 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 15 13:22:23.082877 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 13:22:23.082925 kernel: Using GB pages for direct mapping Jan 15 13:22:23.082938 kernel: ACPI: Early table checksum verification disabled Jan 15 13:22:23.082949 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 15 13:22:23.082961 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.082972 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.082984 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.082995 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 15 13:22:23.083006 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.083018 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 15 13:22:23.083034 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.083046 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 13:22:23.083057 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 15 13:22:23.083069 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 15 13:22:23.083080 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 15 13:22:23.083098 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 15 13:22:23.083110 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 15 13:22:23.083127 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 15 13:22:23.083139 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 15 13:22:23.083151 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 15 13:22:23.083170 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 15 13:22:23.083183 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 15 13:22:23.083195 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 15 13:22:23.083207 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 15 13:22:23.083218 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 15 13:22:23.083236 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 15 13:22:23.083248 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 15 13:22:23.083260 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 15 13:22:23.083272 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 15 13:22:23.083283 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 15 13:22:23.083295 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 15 13:22:23.083307 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 15 13:22:23.083318 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 15 13:22:23.083335 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 15 13:22:23.083354 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 15 13:22:23.083366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 15 13:22:23.083378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 15 13:22:23.083390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 15 13:22:23.083402 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 15 13:22:23.083414 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 15 13:22:23.083426 kernel: Zone ranges: Jan 15 13:22:23.083438 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 13:22:23.083450 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 15 13:22:23.083466 kernel: Normal empty Jan 15 13:22:23.083478 kernel: Movable zone start for each node Jan 15 13:22:23.083490 kernel: Early memory node ranges Jan 15 13:22:23.083502 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 15 13:22:23.083514 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 15 13:22:23.083526 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 15 13:22:23.083538 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 13:22:23.083550 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 15 13:22:23.083562 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 15 13:22:23.083574 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 13:22:23.083591 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 13:22:23.083603 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 15 13:22:23.083615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 13:22:23.083627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 13:22:23.083639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 13:22:23.083655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 13:22:23.083667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 13:22:23.083679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 13:22:23.083690 kernel: TSC deadline timer available Jan 15 13:22:23.083707 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 15 13:22:23.083720 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 13:22:23.083732 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 15 13:22:23.083758 kernel: Booting paravirtualized kernel on KVM Jan 15 13:22:23.083773 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 13:22:23.083785 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 15 13:22:23.083797 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 15 13:22:23.083808 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 15 13:22:23.083820 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 15 13:22:23.083838 kernel: kvm-guest: PV spinlocks enabled Jan 15 13:22:23.083850 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 13:22:23.083863 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 13:22:23.083876 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 15 13:22:23.083909 kernel: random: crng init done Jan 15 13:22:23.083922 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 13:22:23.083934 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 15 13:22:23.083945 kernel: Fallback order for Node 0: 0 Jan 15 13:22:23.083964 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 15 13:22:23.083982 kernel: Policy zone: DMA32 Jan 15 13:22:23.083995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 13:22:23.084007 kernel: software IO TLB: area num 16. Jan 15 13:22:23.084019 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved) Jan 15 13:22:23.084032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 15 13:22:23.084044 kernel: Kernel/User page tables isolation: enabled Jan 15 13:22:23.084055 kernel: ftrace: allocating 37918 entries in 149 pages Jan 15 13:22:23.084073 kernel: ftrace: allocated 149 pages with 4 groups Jan 15 13:22:23.084085 kernel: Dynamic Preempt: voluntary Jan 15 13:22:23.084097 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 13:22:23.084110 kernel: rcu: RCU event tracing is enabled. 
Jan 15 13:22:23.084122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 15 13:22:23.084135 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 13:22:23.084160 kernel: Rude variant of Tasks RCU enabled. Jan 15 13:22:23.084177 kernel: Tracing variant of Tasks RCU enabled. Jan 15 13:22:23.084190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 13:22:23.084202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 15 13:22:23.084215 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 15 13:22:23.084227 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 13:22:23.084244 kernel: Console: colour VGA+ 80x25 Jan 15 13:22:23.084257 kernel: printk: console [tty0] enabled Jan 15 13:22:23.084270 kernel: printk: console [ttyS0] enabled Jan 15 13:22:23.084282 kernel: ACPI: Core revision 20230628 Jan 15 13:22:23.084294 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 13:22:23.084312 kernel: x2apic enabled Jan 15 13:22:23.084325 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 13:22:23.084337 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 15 13:22:23.084350 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 15 13:22:23.084363 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 13:22:23.084375 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 15 13:22:23.084387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 15 13:22:23.084400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 13:22:23.084412 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 13:22:23.084424 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 15 13:22:23.084442 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 15 13:22:23.084454 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 15 13:22:23.084467 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 15 13:22:23.084479 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 15 13:22:23.084491 kernel: MDS: Mitigation: Clear CPU buffers Jan 15 13:22:23.084504 kernel: MMIO Stale Data: Unknown: No mitigations Jan 15 13:22:23.084516 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 15 13:22:23.084528 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 13:22:23.084541 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 13:22:23.084553 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 13:22:23.084565 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 13:22:23.084583 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 15 13:22:23.084596 kernel: Freeing SMP alternatives memory: 32K Jan 15 13:22:23.084614 kernel: pid_max: default: 32768 minimum: 301 Jan 15 13:22:23.084628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 13:22:23.084640 kernel: landlock: Up and running. Jan 15 13:22:23.084652 kernel: SELinux: Initializing. 
Jan 15 13:22:23.084665 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 13:22:23.084677 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 13:22:23.084690 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 15 13:22:23.084702 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 13:22:23.084715 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 13:22:23.084734 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 13:22:23.084755 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 15 13:22:23.084770 kernel: signal: max sigframe size: 1776 Jan 15 13:22:23.084783 kernel: rcu: Hierarchical SRCU implementation. Jan 15 13:22:23.084796 kernel: rcu: Max phase no-delay instances is 400. Jan 15 13:22:23.084808 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 13:22:23.084821 kernel: smp: Bringing up secondary CPUs ... Jan 15 13:22:23.084833 kernel: smpboot: x86: Booting SMP configuration: Jan 15 13:22:23.084845 kernel: .... node #0, CPUs: #1 Jan 15 13:22:23.084864 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 15 13:22:23.084877 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 13:22:23.084915 kernel: smpboot: Max logical packages: 16 Jan 15 13:22:23.084928 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 15 13:22:23.084941 kernel: devtmpfs: initialized Jan 15 13:22:23.084954 kernel: x86/mm: Memory block size: 128MB Jan 15 13:22:23.084966 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 13:22:23.084979 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 15 13:22:23.084991 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 13:22:23.085011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 13:22:23.085023 kernel: audit: initializing netlink subsys (disabled) Jan 15 13:22:23.085036 kernel: audit: type=2000 audit(1736947341.203:1): state=initialized audit_enabled=0 res=1 Jan 15 13:22:23.085048 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 13:22:23.085061 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 13:22:23.085073 kernel: cpuidle: using governor menu Jan 15 13:22:23.085086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 13:22:23.085098 kernel: dca service started, version 1.12.1 Jan 15 13:22:23.085111 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 15 13:22:23.085129 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 15 13:22:23.085142 kernel: PCI: Using configuration type 1 for base access Jan 15 13:22:23.085155 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 13:22:23.085167 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 13:22:23.085180 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 13:22:23.085193 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 13:22:23.085205 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 13:22:23.085218 kernel: ACPI: Added _OSI(Module Device) Jan 15 13:22:23.085230 kernel: ACPI: Added _OSI(Processor Device) Jan 15 13:22:23.085249 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 13:22:23.085261 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 13:22:23.085274 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 13:22:23.085286 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 15 13:22:23.085299 kernel: ACPI: Interpreter enabled Jan 15 13:22:23.085311 kernel: ACPI: PM: (supports S0 S5) Jan 15 13:22:23.085324 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 13:22:23.085336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 13:22:23.085349 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 13:22:23.085367 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 13:22:23.085380 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 13:22:23.085714 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 13:22:23.085941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 15 13:22:23.086113 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 15 13:22:23.086132 kernel: PCI host bridge to bus 0000:00 Jan 15 13:22:23.086380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 13:22:23.086562 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 13:22:23.086727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 13:22:23.086911 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 15 13:22:23.087068 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 15 13:22:23.087225 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 15 13:22:23.087382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 13:22:23.087597 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 15 13:22:23.087856 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 15 13:22:23.088074 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 15 13:22:23.088286 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 15 13:22:23.088465 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 15 13:22:23.088646 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 13:22:23.088859 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.089062 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 15 13:22:23.089259 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.089437 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 15 13:22:23.089623 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.089850 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 15 13:22:23.090061 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 15 
13:22:23.090288 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jan 15 13:22:23.090557 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.090745 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 15 13:22:23.090967 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.091139 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 15 13:22:23.091336 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.091525 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 15 13:22:23.091713 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 15 13:22:23.091934 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 15 13:22:23.092153 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 15 13:22:23.092334 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 15 13:22:23.092516 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 15 13:22:23.092686 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 15 13:22:23.092983 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 15 13:22:23.093168 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 15 13:22:23.093347 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 15 13:22:23.093519 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 15 13:22:23.093684 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 15 13:22:23.093935 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 15 13:22:23.094106 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 13:22:23.094335 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 15 13:22:23.094511 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 15 13:22:23.094677 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 15 13:22:23.095026 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 15 13:22:23.095199 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 15 13:22:23.095393 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 15 13:22:23.095580 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 15 13:22:23.095760 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 15 13:22:23.095961 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 15 13:22:23.096131 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 15 13:22:23.096318 kernel: pci_bus 0000:02: extended config space not accessible Jan 15 13:22:23.096518 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 15 13:22:23.096709 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 15 13:22:23.096941 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 15 13:22:23.097114 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 15 13:22:23.097320 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 15 13:22:23.097493 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 15 13:22:23.097661 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 15 13:22:23.097840 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 15 13:22:23.098085 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 13:22:23.098389 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 15 
13:22:23.098590 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 15 13:22:23.098780 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 15 13:22:23.099096 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 15 13:22:23.099273 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 13:22:23.099447 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 15 13:22:23.100043 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 15 13:22:23.100227 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 13:22:23.100401 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 15 13:22:23.100572 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 15 13:22:23.100741 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 13:22:23.101006 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 15 13:22:23.101175 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 15 13:22:23.101340 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 13:22:23.101510 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 15 13:22:23.101684 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 15 13:22:23.101862 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 13:22:23.102103 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 15 13:22:23.102296 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 15 13:22:23.102471 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 13:22:23.102492 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 13:22:23.102515 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 13:22:23.102528 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 13:22:23.102549 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 13:22:23.102562 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 13:22:23.102585 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 13:22:23.102599 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 13:22:23.102612 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 13:22:23.102625 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 13:22:23.102646 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 13:22:23.102660 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 13:22:23.102672 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 13:22:23.102700 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 13:22:23.102713 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 13:22:23.102726 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 13:22:23.102740 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 13:22:23.102765 kernel: iommu: Default domain type: Translated Jan 15 13:22:23.102779 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 13:22:23.102791 kernel: PCI: Using ACPI for IRQ routing Jan 15 13:22:23.102804 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 13:22:23.102817 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 15 13:22:23.102837 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 15 13:22:23.103079 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jan 15 13:22:23.103247 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 13:22:23.103412 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 13:22:23.103432 kernel: vgaarb: loaded Jan 15 13:22:23.103445 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 13:22:23.103458 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 13:22:23.103471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 13:22:23.103492 kernel: pnp: PnP ACPI init Jan 15 13:22:23.103693 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 15 13:22:23.103723 kernel: pnp: PnP ACPI: found 5 devices Jan 15 13:22:23.103736 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 13:22:23.103761 kernel: NET: Registered PF_INET protocol family Jan 15 13:22:23.103776 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 13:22:23.103789 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 15 13:22:23.103802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 13:22:23.103823 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 15 13:22:23.103836 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 15 13:22:23.103850 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 15 13:22:23.103862 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 13:22:23.103875 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 13:22:23.103905 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 13:22:23.103919 kernel: NET: Registered PF_XDP protocol family Jan 15 13:22:23.104088 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 15 13:22:23.104265 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 15 13:22:23.104450 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 15 13:22:23.104630 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 15 13:22:23.104820 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 15 13:22:23.105045 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 15 13:22:23.105216 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 15 13:22:23.105383 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 15 13:22:23.105573 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 15 13:22:23.105745 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 15 13:22:23.105980 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 15 13:22:23.106149 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 15 13:22:23.106313 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 15 13:22:23.106483 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 15 13:22:23.106647 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 15 13:22:23.106837 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 15 13:22:23.107067 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 15 13:22:23.107258 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 15 
13:22:23.107446 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 15 13:22:23.107616 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 15 13:22:23.107810 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 15 13:22:23.108024 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 15 13:22:23.108190 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 15 13:22:23.108355 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 15 13:22:23.108537 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 15 13:22:23.108702 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 13:22:23.108904 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 15 13:22:23.109076 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 15 13:22:23.109242 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 15 13:22:23.109418 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 13:22:23.109588 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 15 13:22:23.109771 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 15 13:22:23.109999 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 15 13:22:23.110169 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 13:22:23.110334 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 15 13:22:23.110500 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 15 13:22:23.110666 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 15 13:22:23.110846 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 13:22:23.111042 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 15 13:22:23.111218 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 15 13:22:23.111383 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 15 13:22:23.111552 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 13:22:23.111720 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 15 13:22:23.112006 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 15 13:22:23.112178 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 15 13:22:23.112345 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 13:22:23.112511 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 15 13:22:23.112676 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 15 13:22:23.112921 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 15 13:22:23.113093 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 13:22:23.113254 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 13:22:23.113413 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 13:22:23.113587 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 13:22:23.113741 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 15 13:22:23.113954 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 15 13:22:23.114108 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 15 13:22:23.114297 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 15 13:22:23.114457 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 15 13:22:23.114630 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jan 15 13:22:23.114824 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 15 13:22:23.115038 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 15 13:22:23.115201 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 15 13:22:23.115360 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 13:22:23.115625 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 15 13:22:23.115802 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 15 13:22:23.116017 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 13:22:23.116196 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 15 13:22:23.116374 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 15 13:22:23.116537 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 13:22:23.116725 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 15 13:22:23.116933 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 15 13:22:23.117097 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 13:22:23.117309 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 15 13:22:23.117481 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 15 13:22:23.117642 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 13:22:23.117834 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 15 13:22:23.118046 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 15 13:22:23.118218 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 13:22:23.118409 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 15 13:22:23.118587 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 15 13:22:23.118743 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 13:22:23.118785 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 13:22:23.118799 kernel: PCI: CLS 0 bytes, default 64 Jan 15 13:22:23.118813 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 15 13:22:23.118827 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 15 13:22:23.118840 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 15 13:22:23.118854 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 15 13:22:23.118868 kernel: Initialise system trusted keyrings Jan 15 13:22:23.118916 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 15 13:22:23.118930 kernel: Key type asymmetric registered Jan 15 13:22:23.118944 kernel: Asymmetric key parser 'x509' registered Jan 15 13:22:23.118957 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 15 13:22:23.118971 kernel: io scheduler mq-deadline registered Jan 15 13:22:23.118984 kernel: io scheduler kyber registered Jan 15 13:22:23.118997 kernel: io scheduler bfq registered Jan 15 13:22:23.119169 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 15 13:22:23.119339 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 15 13:22:23.119515 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.119684 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 15 13:22:23.119920 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jan 15 13:22:23.120094 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.120263 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 15 13:22:23.120430 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 15 13:22:23.120605 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.120793 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 15 13:22:23.120991 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 15 13:22:23.121161 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.121330 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 15 13:22:23.121506 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 15 13:22:23.121686 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.121914 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 15 13:22:23.122087 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 15 13:22:23.122253 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.122420 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 15 13:22:23.122585 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 15 13:22:23.122780 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.122983 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 15 13:22:23.123159 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 15 13:22:23.123326 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 13:22:23.123347 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 13:22:23.123362 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 13:22:23.123383 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 13:22:23.123396 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 13:22:23.123418 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 13:22:23.123431 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 13:22:23.123445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 13:22:23.123458 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 13:22:23.123639 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 15 13:22:23.123670 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 13:22:23.123846 kernel: rtc_cmos 00:03: registered as rtc0 Jan 15 13:22:23.124051 kernel: rtc_cmos 00:03: setting system clock to 2025-01-15T13:22:22 UTC (1736947342) Jan 15 13:22:23.124208 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 15 13:22:23.124228 kernel: intel_pstate: CPU model not supported Jan 15 13:22:23.124242 kernel: NET: Registered PF_INET6 protocol family Jan 15 13:22:23.124256 kernel: Segment Routing with IPv6 Jan 15 13:22:23.124270 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 
13:22:23.124283 kernel: NET: Registered PF_PACKET protocol family Jan 15 13:22:23.124297 kernel: Key type dns_resolver registered Jan 15 13:22:23.124319 kernel: IPI shorthand broadcast: enabled Jan 15 13:22:23.124333 kernel: sched_clock: Marking stable (1478005266, 236795446)->(1967290418, -252489706) Jan 15 13:22:23.124347 kernel: registered taskstats version 1 Jan 15 13:22:23.124360 kernel: Loading compiled-in X.509 certificates Jan 15 13:22:23.124374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 15 13:22:23.124387 kernel: Key type .fscrypt registered Jan 15 13:22:23.124400 kernel: Key type fscrypt-provisioning registered Jan 15 13:22:23.124414 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 15 13:22:23.124433 kernel: ima: Allocated hash algorithm: sha1 Jan 15 13:22:23.124447 kernel: ima: No architecture policies found Jan 15 13:22:23.124460 kernel: clk: Disabling unused clocks Jan 15 13:22:23.124474 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 15 13:22:23.124487 kernel: Write protecting the kernel read-only data: 36864k Jan 15 13:22:23.124501 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 15 13:22:23.124515 kernel: Run /init as init process Jan 15 13:22:23.124528 kernel: with arguments: Jan 15 13:22:23.124542 kernel: /init Jan 15 13:22:23.124555 kernel: with environment: Jan 15 13:22:23.124574 kernel: HOME=/ Jan 15 13:22:23.124587 kernel: TERM=linux Jan 15 13:22:23.124608 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 13:22:23.124624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 13:22:23.124641 systemd[1]: Detected virtualization kvm. Jan 15 13:22:23.124660 systemd[1]: Detected architecture x86-64. Jan 15 13:22:23.124674 systemd[1]: Running in initrd. Jan 15 13:22:23.124694 systemd[1]: No hostname configured, using default hostname. Jan 15 13:22:23.124708 systemd[1]: Hostname set to . Jan 15 13:22:23.124732 systemd[1]: Initializing machine ID from VM UUID. Jan 15 13:22:23.124757 systemd[1]: Queued start job for default target initrd.target. Jan 15 13:22:23.124774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 13:22:23.124797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 13:22:23.124812 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 13:22:23.124827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 13:22:23.124848 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 13:22:23.124863 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 13:22:23.124904 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 13:22:23.124923 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 13:22:23.124938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 13:22:23.124952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 13:22:23.124967 systemd[1]: Reached target paths.target - Path Units. Jan 15 13:22:23.124989 systemd[1]: Reached target slices.target - Slice Units. Jan 15 13:22:23.125004 systemd[1]: Reached target swap.target - Swaps. Jan 15 13:22:23.125019 systemd[1]: Reached target timers.target - Timer Units. Jan 15 13:22:23.125033 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 13:22:23.125048 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 13:22:23.125063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 13:22:23.125082 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 15 13:22:23.125097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 13:22:23.125112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 13:22:23.125133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 13:22:23.125157 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 13:22:23.125171 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 13:22:23.125186 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 13:22:23.125200 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 13:22:23.125220 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 13:22:23.125234 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 13:22:23.125248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 13:22:23.125268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 13:22:23.125283 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 13:22:23.125298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 13:22:23.125312 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 13:22:23.125327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 13:22:23.125398 systemd-journald[201]: Collecting audit messages is disabled. Jan 15 13:22:23.125432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 13:22:23.125449 systemd-journald[201]: Journal started Jan 15 13:22:23.125483 systemd-journald[201]: Runtime Journal (/run/log/journal/75e65bbd26e74fc2bd060748fc5899a5) is 4.7M, max 38.0M, 33.2M free. Jan 15 13:22:23.099647 systemd-modules-load[202]: Inserted module 'overlay' Jan 15 13:22:23.201102 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 13:22:23.201141 kernel: Bridge firewalling registered Jan 15 13:22:23.201161 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 13:22:23.163029 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 15 13:22:23.202271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 13:22:23.203438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 13:22:23.212234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 13:22:23.214059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 15 13:22:23.222145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 13:22:23.229480 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 13:22:23.248220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 13:22:23.250619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 13:22:23.256337 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:22:23.266130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 13:22:23.267397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 13:22:23.271085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 13:22:23.286807 dracut-cmdline[233]: dracut-dracut-053 Jan 15 13:22:23.293335 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 13:22:23.325896 systemd-resolved[236]: Positive Trust Anchors: Jan 15 13:22:23.327086 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 13:22:23.327133 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 13:22:23.334857 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 15 13:22:23.336814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 13:22:23.337965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 13:22:23.402932 kernel: SCSI subsystem initialized Jan 15 13:22:23.414935 kernel: Loading iSCSI transport class v2.0-870. Jan 15 13:22:23.427915 kernel: iscsi: registered transport (tcp) Jan 15 13:22:23.454974 kernel: iscsi: registered transport (qla4xxx) Jan 15 13:22:23.455034 kernel: QLogic iSCSI HBA Driver Jan 15 13:22:23.514333 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 13:22:23.521095 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 13:22:23.564182 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 15 13:22:23.564281 kernel: device-mapper: uevent: version 1.0.3 Jan 15 13:22:23.565206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 13:22:23.615935 kernel: raid6: sse2x4 gen() 6974 MB/s Jan 15 13:22:23.633914 kernel: raid6: sse2x2 gen() 4973 MB/s Jan 15 13:22:23.652592 kernel: raid6: sse2x1 gen() 5080 MB/s Jan 15 13:22:23.652654 kernel: raid6: using algorithm sse2x4 gen() 6974 MB/s Jan 15 13:22:23.671626 kernel: raid6: .... xor() 4825 MB/s, rmw enabled Jan 15 13:22:23.671673 kernel: raid6: using ssse3x2 recovery algorithm Jan 15 13:22:23.697927 kernel: xor: automatically using best checksumming function avx Jan 15 13:22:23.903733 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 13:22:23.930710 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 13:22:23.942356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 13:22:23.976588 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 15 13:22:23.983993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 13:22:23.993085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 13:22:24.032674 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 15 13:22:24.081120 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 13:22:24.088158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 13:22:24.216019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 13:22:24.225511 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 13:22:24.256697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 13:22:24.259965 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 13:22:24.262633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 13:22:24.263459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 13:22:24.273073 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 13:22:24.306908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 13:22:24.363087 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 15 13:22:24.440586 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 13:22:24.440615 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 15 13:22:24.440931 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 13:22:24.440983 kernel: GPT:17805311 != 125829119 Jan 15 13:22:24.441013 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 13:22:24.441031 kernel: GPT:17805311 != 125829119 Jan 15 13:22:24.441048 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 13:22:24.441110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:22:24.441136 kernel: AVX version of gcm_enc/dec engaged. Jan 15 13:22:24.441156 kernel: AES CTR mode by8 optimization enabled Jan 15 13:22:24.447781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 13:22:24.448022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:22:24.455240 kernel: libata version 3.00 loaded. Jan 15 13:22:24.449124 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 15 13:22:24.478211 kernel: ACPI: bus type USB registered Jan 15 13:22:24.478256 kernel: usbcore: registered new interface driver usbfs Jan 15 13:22:24.478344 kernel: usbcore: registered new interface driver hub Jan 15 13:22:24.478367 kernel: usbcore: registered new device driver usb Jan 15 13:22:24.459985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 13:22:24.460216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 13:22:24.480473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 13:22:24.494395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 13:22:24.516503 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 13:22:24.520341 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 15 13:22:24.520573 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 15 13:22:24.520812 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 13:22:24.522145 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 15 13:22:24.522399 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 15 13:22:24.525978 kernel: hub 1-0:1.0: USB hub found Jan 15 13:22:24.526298 kernel: hub 1-0:1.0: 4 ports detected Jan 15 13:22:24.526581 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 15 13:22:24.526871 kernel: hub 2-0:1.0: USB hub found Jan 15 13:22:24.529197 kernel: hub 2-0:1.0: 4 ports detected Jan 15 13:22:24.547917 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (472) Jan 15 13:22:24.577919 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477) Jan 15 13:22:24.586913 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 13:22:24.611658 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 13:22:24.611691 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 15 13:22:24.611977 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 13:22:24.612185 kernel: scsi host0: ahci Jan 15 13:22:24.612436 kernel: scsi host1: ahci Jan 15 13:22:24.612736 kernel: scsi host2: ahci Jan 15 13:22:24.612957 kernel: scsi host3: ahci Jan 15 13:22:24.613163 kernel: scsi host4: ahci Jan 15 13:22:24.613362 kernel: scsi host5: ahci Jan 15 13:22:24.613556 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 15 13:22:24.613611 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 15 13:22:24.613634 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 15 13:22:24.613652 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 15 13:22:24.613672 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 15 13:22:24.613690 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 15 13:22:24.599563 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 13:22:24.686077 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 15 13:22:24.688309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 13:22:24.697285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 13:22:24.705439 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 13:22:24.712714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 13:22:24.725181 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 13:22:24.730512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 13:22:24.737130 disk-uuid[564]: Primary Header is updated. Jan 15 13:22:24.737130 disk-uuid[564]: Secondary Entries is updated. Jan 15 13:22:24.737130 disk-uuid[564]: Secondary Header is updated. Jan 15 13:22:24.743910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:22:24.750950 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 15 13:22:24.751032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:22:24.775342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:22:24.915918 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 13:22:24.922915 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.922962 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.925170 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.927904 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.931331 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.931374 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 13:22:24.947502 kernel: usbcore: registered new interface driver usbhid Jan 15 13:22:24.947564 kernel: usbhid: USB HID core driver Jan 15 13:22:24.955632 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 15 13:22:24.955725 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 15 13:22:25.754941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 13:22:25.756148 disk-uuid[566]: The operation has completed successfully. Jan 15 13:22:25.806688 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 13:22:25.806852 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 13:22:25.825112 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 13:22:25.831690 sh[585]: Success Jan 15 13:22:25.848915 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 15 13:22:25.919027 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 13:22:25.930045 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 13:22:25.934770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 13:22:25.962554 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 15 13:22:25.962640 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:22:25.966460 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 13:22:25.966499 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 13:22:25.969790 kernel: BTRFS info (device dm-0): using free space tree Jan 15 13:22:25.980759 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 15 13:22:25.982334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 13:22:25.989105 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 13:22:25.993969 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 13:22:26.009322 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:22:26.009390 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:22:26.009410 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:22:26.015919 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:22:26.031130 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:22:26.030812 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 13:22:26.039109 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 13:22:26.045138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 13:22:26.144962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 13:22:26.212369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 13:22:26.243583 systemd-networkd[769]: lo: Link UP Jan 15 13:22:26.243598 systemd-networkd[769]: lo: Gained carrier Jan 15 13:22:26.246966 systemd-networkd[769]: Enumeration completed Jan 15 13:22:26.247092 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 13:22:26.248062 systemd[1]: Reached target network.target - Network. Jan 15 13:22:26.249291 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:22:26.249297 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 13:22:26.252692 systemd-networkd[769]: eth0: Link UP Jan 15 13:22:26.252702 systemd-networkd[769]: eth0: Gained carrier Jan 15 13:22:26.252715 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:22:26.291985 systemd-networkd[769]: eth0: DHCPv4 address 10.230.58.186/30, gateway 10.230.58.185 acquired from 10.230.58.185 Jan 15 13:22:26.335540 ignition[672]: Ignition 2.19.0 Jan 15 13:22:26.338355 ignition[672]: Stage: fetch-offline Jan 15 13:22:26.338487 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:26.338510 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:26.338726 ignition[672]: parsed url from cmdline: "" Jan 15 13:22:26.338734 ignition[672]: no config URL provided Jan 15 13:22:26.338744 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 13:22:26.342967 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 13:22:26.338762 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 15 13:22:26.338771 ignition[672]: failed to fetch config: resource requires networking Jan 15 13:22:26.340856 ignition[672]: Ignition finished successfully Jan 15 13:22:26.350112 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 15 13:22:26.374451 ignition[777]: Ignition 2.19.0 Jan 15 13:22:26.374483 ignition[777]: Stage: fetch Jan 15 13:22:26.374737 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:26.374761 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:26.374955 ignition[777]: parsed url from cmdline: "" Jan 15 13:22:26.374962 ignition[777]: no config URL provided Jan 15 13:22:26.374972 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 13:22:26.374990 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 15 13:22:26.375173 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 15 13:22:26.375718 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 15 13:22:26.375786 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 15 13:22:26.391252 ignition[777]: GET result: OK Jan 15 13:22:26.392268 ignition[777]: parsing config with SHA512: b5593fc3c6a3e6431a32fb03194858f19c896a3026ae64ec818c2ef5c3e7095bfe6b32e34b2e8d387b0c03851e063a3f449936a01c2748dd9681e93614ba0118 Jan 15 13:22:26.401470 unknown[777]: fetched base config from "system" Jan 15 13:22:26.402404 ignition[777]: fetch: fetch complete Jan 15 13:22:26.401490 unknown[777]: fetched base config from "system" Jan 15 13:22:26.402489 ignition[777]: fetch: fetch passed Jan 15 13:22:26.401505 unknown[777]: fetched user config from "openstack" Jan 15 13:22:26.402573 ignition[777]: Ignition finished successfully Jan 15 13:22:26.404701 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 13:22:26.414159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 13:22:26.437203 ignition[783]: Ignition 2.19.0 Jan 15 13:22:26.437227 ignition[783]: Stage: kargs Jan 15 13:22:26.437475 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:26.437510 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:26.441972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 13:22:26.438749 ignition[783]: kargs: kargs passed Jan 15 13:22:26.438825 ignition[783]: Ignition finished successfully Jan 15 13:22:26.451116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 13:22:26.479791 ignition[789]: Ignition 2.19.0 Jan 15 13:22:26.479806 ignition[789]: Stage: disks Jan 15 13:22:26.480126 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:26.480149 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:26.481405 ignition[789]: disks: disks passed Jan 15 13:22:26.481491 ignition[789]: Ignition finished successfully Jan 15 13:22:26.484348 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 13:22:26.487293 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 13:22:26.488324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 13:22:26.490042 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 13:22:26.491617 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 13:22:26.493152 systemd[1]: Reached target basic.target - Basic System. Jan 15 13:22:26.504107 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 15 13:22:26.527958 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 15 13:22:26.532092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 13:22:26.541043 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 13:22:26.718979 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 15 13:22:26.720697 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 13:22:26.722199 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 13:22:26.737064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 13:22:26.740441 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 13:22:26.742522 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 13:22:26.749102 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 15 13:22:26.755314 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Jan 15 13:22:26.753264 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 13:22:26.753311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 13:22:26.767165 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:22:26.767195 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:22:26.767231 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:22:26.764953 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 13:22:26.780397 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:22:26.780754 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 13:22:26.785110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 13:22:26.861877 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 13:22:26.870523 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 15 13:22:26.880391 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 13:22:26.890080 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 13:22:27.012325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 13:22:27.018008 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 13:22:27.025108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 13:22:27.038180 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 13:22:27.039204 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:22:27.167866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 15 13:22:27.186442 ignition[926]: INFO : Ignition 2.19.0 Jan 15 13:22:27.186442 ignition[926]: INFO : Stage: mount Jan 15 13:22:27.188504 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:27.188504 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:27.188504 ignition[926]: INFO : mount: mount passed Jan 15 13:22:27.188504 ignition[926]: INFO : Ignition finished successfully Jan 15 13:22:27.190267 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 13:22:27.874308 systemd-networkd[769]: eth0: Gained IPv6LL Jan 15 13:22:29.382793 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eae:24:19ff:fee6:3aba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eae:24:19ff:fee6:3aba/64 assigned by NDisc. Jan 15 13:22:29.382810 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 13:22:33.940425 coreos-metadata[807]: Jan 15 13:22:33.940 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:22:33.962632 coreos-metadata[807]: Jan 15 13:22:33.962 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 13:22:33.977951 coreos-metadata[807]: Jan 15 13:22:33.977 INFO Fetch successful Jan 15 13:22:33.979144 coreos-metadata[807]: Jan 15 13:22:33.979 INFO wrote hostname srv-yypfq.gb1.brightbox.com to /sysroot/etc/hostname Jan 15 13:22:33.981513 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 15 13:22:33.981709 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 15 13:22:33.988993 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 13:22:34.020371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 13:22:34.030921 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 15 13:22:34.034288 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 13:22:34.034325 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 13:22:34.036140 kernel: BTRFS info (device vda6): using free space tree Jan 15 13:22:34.042176 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 13:22:34.044706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 13:22:34.081751 ignition[959]: INFO : Ignition 2.19.0 Jan 15 13:22:34.082959 ignition[959]: INFO : Stage: files Jan 15 13:22:34.083953 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:34.085992 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:34.087064 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 15 13:22:34.088112 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 13:22:34.088112 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 13:22:34.091026 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 13:22:34.092053 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 13:22:34.092053 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 13:22:34.091729 unknown[959]: wrote ssh authorized keys file for user: core Jan 15 13:22:34.095012 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 15 13:22:34.095012 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 15 13:22:34.095012 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 13:22:34.095012 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 15 13:22:34.324981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 15 13:22:34.873547 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 13:22:34.875149 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 13:22:34.890640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 13:22:34.890640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 13:22:34.890640 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:22:34.890640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:22:34.890640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:22:34.890640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 15 13:22:35.421460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 15 13:22:38.754230 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 15 13:22:38.754230 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 15 13:22:38.758058 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 13:22:38.771777 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 13:22:38.771777 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 13:22:38.771777 ignition[959]: INFO : files: files passed Jan 15 13:22:38.771777 ignition[959]: INFO : Ignition finished successfully Jan 15 13:22:38.763559 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 13:22:38.777215 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 13:22:38.788156 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 13:22:38.802132 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 13:22:38.802394 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 15 13:22:38.810355 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:22:38.810355 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:22:38.814065 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 13:22:38.816582 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 13:22:38.818428 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 13:22:38.827161 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 13:22:38.867453 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 13:22:38.867654 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 13:22:38.869920 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 13:22:38.871279 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 13:22:38.873013 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 13:22:38.889148 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 13:22:38.908321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 13:22:38.917128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 13:22:38.930857 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 13:22:38.931871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 13:22:38.933636 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 13:22:38.935180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 13:22:38.935368 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 13:22:38.937326 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 13:22:38.938352 systemd[1]: Stopped target basic.target - Basic System. Jan 15 13:22:38.939962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 13:22:38.941350 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 13:22:38.942906 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 13:22:38.944591 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 13:22:38.946304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 13:22:38.948002 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 13:22:38.949611 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 13:22:38.951170 systemd[1]: Stopped target swap.target - Swaps. Jan 15 13:22:38.952564 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 13:22:38.952767 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 13:22:38.954590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 13:22:38.955575 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 13:22:38.957106 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 13:22:38.957301 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 15 13:22:38.958773 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 13:22:38.958987 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 13:22:38.961053 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 13:22:38.961245 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 13:22:38.962946 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 13:22:38.963105 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 13:22:38.974221 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 13:22:38.975009 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 13:22:38.975263 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 13:22:38.991528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 13:22:38.993002 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 13:22:38.993202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 13:22:38.994514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 13:22:38.994722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 13:22:39.004219 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 13:22:39.005213 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 13:22:39.008053 ignition[1012]: INFO : Ignition 2.19.0 Jan 15 13:22:39.008053 ignition[1012]: INFO : Stage: umount Jan 15 13:22:39.008053 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 13:22:39.008053 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 13:22:39.014548 ignition[1012]: INFO : umount: umount passed Jan 15 13:22:39.014548 ignition[1012]: INFO : Ignition finished successfully Jan 15 13:22:39.010451 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 13:22:39.011157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 13:22:39.015453 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 13:22:39.015636 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 13:22:39.017326 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 13:22:39.017399 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 13:22:39.021303 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 13:22:39.021377 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 13:22:39.022727 systemd[1]: Stopped target network.target - Network. Jan 15 13:22:39.024034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 13:22:39.024110 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 13:22:39.025583 systemd[1]: Stopped target paths.target - Path Units. Jan 15 13:22:39.028006 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 13:22:39.031972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 13:22:39.033077 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 13:22:39.037008 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 13:22:39.038455 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 13:22:39.038530 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 15 13:22:39.039838 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 13:22:39.039925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 13:22:39.041238 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 13:22:39.041344 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 13:22:39.042653 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 13:22:39.042725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 13:22:39.044359 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 13:22:39.046963 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 13:22:39.051220 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 13:22:39.052957 systemd-networkd[769]: eth0: DHCPv6 lease lost Jan 15 13:22:39.053320 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 13:22:39.053467 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 13:22:39.055848 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 13:22:39.056126 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 13:22:39.060327 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 13:22:39.060548 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 13:22:39.064400 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 13:22:39.064660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 13:22:39.066126 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 13:22:39.066203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 13:22:39.076043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 13:22:39.077289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 13:22:39.077367 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 13:22:39.080272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 13:22:39.080347 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 13:22:39.082026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 13:22:39.082098 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 13:22:39.083411 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 13:22:39.083480 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 13:22:39.085314 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 13:22:39.102625 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 13:22:39.102928 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 13:22:39.104582 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 13:22:39.104727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 13:22:39.107744 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 13:22:39.107862 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 13:22:39.109275 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 13:22:39.109338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 15 13:22:39.110754 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 13:22:39.110825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 13:22:39.113062 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 13:22:39.113131 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 13:22:39.114635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 13:22:39.114718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 13:22:39.122086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 13:22:39.123022 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 13:22:39.123107 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 13:22:39.124721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 13:22:39.124792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 13:22:39.141142 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 13:22:39.141330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 13:22:39.143538 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 13:22:39.160603 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 13:22:39.170921 systemd[1]: Switching root. Jan 15 13:22:39.205312 systemd-journald[201]: Journal stopped Jan 15 13:22:40.759336 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 15 13:22:40.759436 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 13:22:40.759463 kernel: SELinux: policy capability open_perms=1 Jan 15 13:22:40.759482 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 13:22:40.759508 kernel: SELinux: policy capability always_check_network=0 Jan 15 13:22:40.759537 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 13:22:40.759557 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 13:22:40.759590 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 13:22:40.759610 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 13:22:40.759629 kernel: audit: type=1403 audit(1736947359.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 13:22:40.759649 systemd[1]: Successfully loaded SELinux policy in 61.358ms. Jan 15 13:22:40.759690 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.141ms. Jan 15 13:22:40.759713 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 13:22:40.759733 systemd[1]: Detected virtualization kvm. Jan 15 13:22:40.759754 systemd[1]: Detected architecture x86-64. Jan 15 13:22:40.759803 systemd[1]: Detected first boot. Jan 15 13:22:40.759825 systemd[1]: Hostname set to . Jan 15 13:22:40.759846 systemd[1]: Initializing machine ID from VM UUID. Jan 15 13:22:40.759865 zram_generator::config[1074]: No configuration found. Jan 15 13:22:40.759909 systemd[1]: Populated /etc with preset unit settings. Jan 15 13:22:40.759931 systemd[1]: Queued start job for default target multi-user.target. 
Jan 15 13:22:40.759951 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 13:22:40.759973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 13:22:40.760008 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 13:22:40.760030 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 13:22:40.760050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 13:22:40.760069 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 13:22:40.760094 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 13:22:40.760114 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 13:22:40.760133 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 13:22:40.760157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 13:22:40.760177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 13:22:40.760218 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 13:22:40.760255 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 13:22:40.760277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 13:22:40.760297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 13:22:40.760332 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 13:22:40.760353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 13:22:40.760373 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 13:22:40.760393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 13:22:40.760426 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 13:22:40.760448 systemd[1]: Reached target slices.target - Slice Units. Jan 15 13:22:40.760469 systemd[1]: Reached target swap.target - Swaps. Jan 15 13:22:40.760489 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 13:22:40.760509 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 13:22:40.760528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 13:22:40.760560 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 15 13:22:40.760607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 13:22:40.760630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 13:22:40.760650 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 13:22:40.760670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 13:22:40.760690 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 13:22:40.760709 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 13:22:40.760741 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 13:22:40.760763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 15 13:22:40.760783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 13:22:40.760803 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 13:22:40.760837 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 13:22:40.760858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 13:22:40.762619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:22:40.762675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 13:22:40.762725 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 13:22:40.762780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 13:22:40.762816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 13:22:40.762847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 13:22:40.762877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 13:22:40.762941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 13:22:40.762976 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 13:22:40.763007 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 15 13:22:40.763038 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 15 13:22:40.763094 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 13:22:40.763117 kernel: fuse: init (API version 7.39) Jan 15 13:22:40.763137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 13:22:40.763157 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 13:22:40.763177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 13:22:40.763207 kernel: loop: module loaded Jan 15 13:22:40.763265 systemd-journald[1187]: Collecting audit messages is disabled. Jan 15 13:22:40.763316 systemd-journald[1187]: Journal started Jan 15 13:22:40.763369 systemd-journald[1187]: Runtime Journal (/run/log/journal/75e65bbd26e74fc2bd060748fc5899a5) is 4.7M, max 38.0M, 33.2M free. Jan 15 13:22:40.766807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 13:22:40.778911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:40.791913 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 13:22:40.795939 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 13:22:40.796872 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 13:22:40.800682 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 13:22:40.801628 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 13:22:40.802496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 13:22:40.803401 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 13:22:40.823054 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 15 13:22:40.824314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 13:22:40.825579 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 13:22:40.825836 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 13:22:40.827538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 13:22:40.827786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 13:22:40.830180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 13:22:40.830449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 13:22:40.831721 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 13:22:40.832014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 13:22:40.834935 kernel: ACPI: bus type drm_connector registered Jan 15 13:22:40.834995 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 13:22:40.835311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 13:22:40.837294 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 13:22:40.838396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 13:22:40.840684 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 13:22:40.845134 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 13:22:40.846456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 13:22:40.861232 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 13:22:40.869043 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 13:22:40.876006 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 13:22:40.882002 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 13:22:40.890119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 13:22:40.906268 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 13:22:40.907320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 13:22:40.914603 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 13:22:40.915507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 13:22:40.925121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 13:22:40.937367 systemd-journald[1187]: Time spent on flushing to /var/log/journal/75e65bbd26e74fc2bd060748fc5899a5 is 50.062ms for 1123 entries. Jan 15 13:22:40.937367 systemd-journald[1187]: System Journal (/var/log/journal/75e65bbd26e74fc2bd060748fc5899a5) is 8.0M, max 584.8M, 576.8M free. Jan 15 13:22:41.045467 systemd-journald[1187]: Received client request to flush runtime journal. Jan 15 13:22:40.946097 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 13:22:40.957483 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 15 13:22:40.958505 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 13:22:40.959798 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 13:22:40.962932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 13:22:41.004617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 13:22:41.051485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 13:22:41.067368 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 15 13:22:41.067780 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 15 13:22:41.078383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 13:22:41.090174 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 13:22:41.091629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 13:22:41.101163 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 15 13:22:41.119965 udevadm[1246]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 15 13:22:41.150589 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 13:22:41.162135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 13:22:41.184615 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 15 13:22:41.185125 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 15 13:22:41.192230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 13:22:41.984463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 13:22:42.005228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 13:22:42.058920 systemd-udevd[1256]: Using default interface naming scheme 'v255'. Jan 15 13:22:42.089350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 13:22:42.100132 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 13:22:42.133334 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 13:22:42.233923 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 13:22:42.274393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1264) Jan 15 13:22:42.271667 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 15 13:22:42.420796 systemd-networkd[1260]: lo: Link UP Jan 15 13:22:42.420810 systemd-networkd[1260]: lo: Gained carrier Jan 15 13:22:42.424699 systemd-networkd[1260]: Enumeration completed Jan 15 13:22:42.426206 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:22:42.426213 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 13:22:42.426809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 15 13:22:42.434672 systemd-networkd[1260]: eth0: Link UP Jan 15 13:22:42.434694 systemd-networkd[1260]: eth0: Gained carrier Jan 15 13:22:42.434737 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:22:42.437836 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 13:22:42.453271 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 13:22:42.453318 systemd-networkd[1260]: eth0: DHCPv4 address 10.230.58.186/30, gateway 10.230.58.185 acquired from 10.230.58.185 Jan 15 13:22:42.459179 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 13:22:42.502182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 13:22:42.510900 kernel: ACPI: button: Power Button [PWRF] Jan 15 13:22:42.542934 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 13:22:42.585932 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 15 13:22:42.605463 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 13:22:42.621183 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 15 13:22:42.621579 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 13:22:42.650200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 13:22:42.828915 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 15 13:22:42.894503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 13:22:42.909225 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 15 13:22:42.930937 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 13:22:42.968392 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 15 13:22:42.970306 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 13:22:42.978179 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 15 13:22:42.991827 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 13:22:43.030732 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 15 13:22:43.032563 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 13:22:43.033830 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 13:22:43.033896 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 13:22:43.034698 systemd[1]: Reached target machines.target - Containers. Jan 15 13:22:43.037845 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 15 13:22:43.043182 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 13:22:43.061298 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 13:22:43.063057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 15 13:22:43.068182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 13:22:43.079152 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 15 13:22:43.086735 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 13:22:43.135162 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 13:22:43.193597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 13:22:43.198698 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 13:22:43.200293 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 15 13:22:43.216019 kernel: loop0: detected capacity change from 0 to 140768 Jan 15 13:22:43.247219 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 13:22:43.274945 kernel: loop1: detected capacity change from 0 to 8 Jan 15 13:22:43.304541 kernel: loop2: detected capacity change from 0 to 142488 Jan 15 13:22:43.366066 kernel: loop3: detected capacity change from 0 to 211296 Jan 15 13:22:43.440913 kernel: loop4: detected capacity change from 0 to 140768 Jan 15 13:22:43.463943 kernel: loop5: detected capacity change from 0 to 8 Jan 15 13:22:43.468946 kernel: loop6: detected capacity change from 0 to 142488 Jan 15 13:22:43.491423 kernel: loop7: detected capacity change from 0 to 211296 Jan 15 13:22:43.507167 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 15 13:22:43.509811 (sd-merge)[1320]: Merged extensions into '/usr'. Jan 15 13:22:43.515856 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 13:22:43.515955 systemd[1]: Reloading... Jan 15 13:22:43.730229 zram_generator::config[1346]: No configuration found. Jan 15 13:22:43.942834 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 13:22:43.997495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:22:44.006692 systemd-networkd[1260]: eth0: Gained IPv6LL Jan 15 13:22:44.097029 systemd[1]: Reloading finished in 580 ms. Jan 15 13:22:44.121974 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 13:22:44.123728 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 13:22:44.125473 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 13:22:44.139186 systemd[1]: Starting ensure-sysext.service... Jan 15 13:22:44.146115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 13:22:44.157139 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)... Jan 15 13:22:44.157166 systemd[1]: Reloading... Jan 15 13:22:44.197271 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 13:22:44.198120 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 15 13:22:44.199813 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 13:22:44.201356 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 15 13:22:44.201486 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 15 13:22:44.208127 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 13:22:44.208147 systemd-tmpfiles[1414]: Skipping /boot Jan 15 13:22:44.228369 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 13:22:44.228390 systemd-tmpfiles[1414]: Skipping /boot Jan 15 13:22:44.291933 zram_generator::config[1442]: No configuration found. Jan 15 13:22:44.455143 systemd-networkd[1260]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eae:24:19ff:fee6:3aba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eae:24:19ff:fee6:3aba/64 assigned by NDisc. Jan 15 13:22:44.455159 systemd-networkd[1260]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 13:22:44.471388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:22:44.557424 systemd[1]: Reloading finished in 399 ms. Jan 15 13:22:44.585630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 13:22:44.609211 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 13:22:44.621153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 13:22:44.629151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 13:22:44.645841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 13:22:44.662250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 13:22:44.673827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.674195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:22:44.680239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 13:22:44.693986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 13:22:44.707366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 13:22:44.712393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:22:44.712601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.728178 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.728569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:22:44.729283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
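The networkd messages above about the DHCPv6 address conflicting with the NDisc/SLAAC address point at IPv6Token= and UseAutonomousPrefix= as remedies. A hedged sketch of the drop-in that hint refers to, using the classic [Network] IPv6Token= form with a placeholder token (on newer systemd the equivalent knobs live in the [IPv6AcceptRA] section as Token= and UseAutonomousPrefix=):

    mkdir -p /etc/systemd/network/zz-default.network.d
    cat <<'EOF' >/etc/systemd/network/zz-default.network.d/10-ipv6-token.conf
    [Network]
    # placeholder token; fixes the SLAAC interface identifier instead of deriving it from the MAC
    IPv6Token=::1234:5678:9abc:def0
    EOF
    networkctl reload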
Jan 15 13:22:44.729577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.737508 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 13:22:44.743319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 13:22:44.743588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 13:22:44.751606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 13:22:44.752207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 13:22:44.756428 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 13:22:44.756728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 13:22:44.767478 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 13:22:44.775784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.776219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 13:22:44.788232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 13:22:44.795187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 13:22:44.804840 augenrules[1542]: No rules Jan 15 13:22:44.805238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 13:22:44.820243 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 13:22:44.821836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 13:22:44.833227 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 13:22:44.837047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 13:22:44.844413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 13:22:44.846969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 13:22:44.854326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 13:22:44.854595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 13:22:44.857310 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 13:22:44.857599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 13:22:44.860217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 13:22:44.860463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 13:22:44.862996 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 13:22:44.863381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 13:22:44.871950 systemd[1]: Finished ensure-sysext.service. Jan 15 13:22:44.881677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 13:22:44.881805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 15 13:22:44.890125 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 13:22:44.893023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 13:22:44.893571 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 13:22:44.904476 systemd-resolved[1514]: Positive Trust Anchors: Jan 15 13:22:44.904504 systemd-resolved[1514]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 13:22:44.904551 systemd-resolved[1514]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 13:22:44.916649 systemd-resolved[1514]: Using system hostname 'srv-yypfq.gb1.brightbox.com'. Jan 15 13:22:44.920494 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 13:22:44.921462 systemd[1]: Reached target network.target - Network. Jan 15 13:22:44.922849 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 13:22:44.924819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 13:22:44.982332 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 13:22:44.984144 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 13:22:44.985155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 13:22:44.986099 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 13:22:44.986972 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 13:22:44.987822 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 13:22:44.987894 systemd[1]: Reached target paths.target - Path Units. Jan 15 13:22:44.988575 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 13:22:44.989632 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 13:22:44.990636 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 13:22:44.991504 systemd[1]: Reached target timers.target - Timer Units. Jan 15 13:22:44.993793 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 13:22:44.997358 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 13:22:45.000478 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 13:22:45.004309 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 13:22:45.005146 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 13:22:45.005855 systemd[1]: Reached target basic.target - Basic System. 
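The resolved "Positive Trust Anchors" entry above is the IANA root DNSSEC trust anchor (KSK-2017, key tag 20326), and the negative anchors are the private and reverse zones resolved will not attempt to validate. The resulting resolver state can be checked with the standard resolvectl invocations:

    resolvectl status              # per-link DNS servers and DNSSEC/trust-anchor state
    resolvectl query example.com   # resolve through the local stub listener at 127.0.0.53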
Jan 15 13:22:45.006875 systemd[1]: System is tainted: cgroupsv1 Jan 15 13:22:45.006974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 13:22:45.007044 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 13:22:45.011137 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 13:22:45.022810 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 13:22:45.027677 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 13:22:45.039006 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 13:22:45.047852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 13:22:45.048755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 13:22:45.057546 jq[1576]: false Jan 15 13:22:45.062278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:22:45.074102 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 13:22:45.081287 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 13:22:45.096454 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 13:22:45.107730 dbus-daemon[1575]: [system] SELinux support is enabled Jan 15 13:22:45.113224 dbus-daemon[1575]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1260 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 15 13:22:45.127557 extend-filesystems[1579]: Found loop4 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found loop5 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found loop6 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found loop7 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda1 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda2 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda3 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found usr Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda4 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda6 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda7 Jan 15 13:22:45.130812 extend-filesystems[1579]: Found vda9 Jan 15 13:22:45.130812 extend-filesystems[1579]: Checking size of /dev/vda9 Jan 15 13:22:45.132165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 13:22:45.153119 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 13:22:45.172982 extend-filesystems[1579]: Resized partition /dev/vda9 Jan 15 13:22:45.173568 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 13:22:45.179937 extend-filesystems[1605]: resize2fs 1.47.1 (20-May-2024) Jan 15 13:22:45.179198 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 13:22:45.190787 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 13:22:45.198006 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
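"System is tainted: cgroupsv1" above means this image boots systemd on the legacy cgroup hierarchy, which also explains the cgroup-compat translation notice a little later in the log. A quick way to check which hierarchy a host is on, plus the generic kernel-command-line switch to the unified hierarchy (a stock systemd option, not something taken from this host's configuration):

    stat -fc %T /sys/fs/cgroup    # prints "tmpfs" on a v1/hybrid host, "cgroup2fs" on unified v2
    # switching hosts to v2 is done on the kernel command line: systemd.unified_cgroup_hierarchy=1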
Jan 15 13:22:45.201912 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 15 13:22:45.206299 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 13:22:45.223499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 13:22:45.224063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 13:22:45.232437 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 13:22:45.232802 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 13:22:45.251228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 13:22:45.257555 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 13:22:45.259008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 13:22:45.271153 update_engine[1607]: I20250115 13:22:45.270995 1607 main.cc:92] Flatcar Update Engine starting Jan 15 13:22:45.293194 update_engine[1607]: I20250115 13:22:45.293124 1607 update_check_scheduler.cc:74] Next update check in 8m57s Jan 15 13:22:45.297397 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 13:22:45.297476 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 13:22:45.299595 jq[1610]: true Jan 15 13:22:45.302660 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 13:22:45.302703 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 13:22:45.311101 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 13:22:45.314220 systemd[1]: Started update-engine.service - Update Engine. Jan 15 13:22:45.321253 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 13:22:45.328221 systemd-timesyncd[1566]: Contacted time server 217.144.90.27:123 (0.flatcar.pool.ntp.org). Jan 15 13:22:45.328295 systemd-timesyncd[1566]: Initial clock synchronization to Wed 2025-01-15 13:22:45.677011 UTC. Jan 15 13:22:45.359201 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 15 13:22:45.375834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 13:22:45.377094 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 13:22:45.390694 tar[1617]: linux-amd64/helm Jan 15 13:22:45.421866 jq[1630]: true Jan 15 13:22:45.525780 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1634) Jan 15 13:22:45.734227 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 13:22:45.756480 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 13:22:45.756558 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 13:22:45.766450 systemd-logind[1604]: New seat seat0. 
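The kernel line above starts an online grow of the root ext4 from 1617920 to 15121403 4 KiB blocks, i.e. from roughly 6.2 GiB to about 57.7 GiB; the extend-filesystems entries further down report the same resize completing via resize2fs 1.47.1. A generic sketch of the equivalent manual steps (growpart comes from cloud-utils and is an assumption here, not what Flatcar's extend-filesystems unit actually invokes):

    growpart /dev/vda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem online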
Jan 15 13:22:45.777148 bash[1660]: Updated "/home/core/.ssh/authorized_keys" Jan 15 13:22:45.774197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 13:22:45.799925 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 15 13:22:45.813241 systemd[1]: Starting sshkeys.service... Jan 15 13:22:45.815123 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 13:22:45.841723 extend-filesystems[1605]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 13:22:45.841723 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 15 13:22:45.841723 extend-filesystems[1605]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 15 13:22:45.839419 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 13:22:45.964965 extend-filesystems[1579]: Resized filesystem in /dev/vda9 Jan 15 13:22:45.839861 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 13:22:46.025290 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 15 13:22:46.042144 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 15 13:22:46.393349 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 15 13:22:46.394624 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 15 13:22:46.421631 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1636 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 15 13:22:46.468352 systemd[1]: Starting polkit.service - Authorization Manager... Jan 15 13:22:46.630555 containerd[1622]: time="2025-01-15T13:22:46.629335672Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 13:22:46.657530 polkitd[1685]: Started polkitd version 121 Jan 15 13:22:46.695835 polkitd[1685]: Loading rules from directory /etc/polkit-1/rules.d Jan 15 13:22:46.695987 polkitd[1685]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 15 13:22:46.717442 polkitd[1685]: Finished loading, compiling and executing 2 rules Jan 15 13:22:46.722079 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 15 13:22:46.722362 systemd[1]: Started polkit.service - Authorization Manager. Jan 15 13:22:46.725408 polkitd[1685]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 15 13:22:46.740757 containerd[1622]: time="2025-01-15T13:22:46.740660804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.745954 containerd[1622]: time="2025-01-15T13:22:46.745889241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:22:46.746021 containerd[1622]: time="2025-01-15T13:22:46.745953721Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 13:22:46.746373 containerd[1622]: time="2025-01-15T13:22:46.746334562Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 15 13:22:46.746707 containerd[1622]: time="2025-01-15T13:22:46.746672785Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 13:22:46.746849 containerd[1622]: time="2025-01-15T13:22:46.746716540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.746896 containerd[1622]: time="2025-01-15T13:22:46.746851578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:22:46.746896 containerd[1622]: time="2025-01-15T13:22:46.746880471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.748304 containerd[1622]: time="2025-01-15T13:22:46.748051766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:22:46.748304 containerd[1622]: time="2025-01-15T13:22:46.748089081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.748304 containerd[1622]: time="2025-01-15T13:22:46.748127929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:22:46.748304 containerd[1622]: time="2025-01-15T13:22:46.748147221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.748497 containerd[1622]: time="2025-01-15T13:22:46.748402486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.749048 containerd[1622]: time="2025-01-15T13:22:46.749017669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 15 13:22:46.749298 containerd[1622]: time="2025-01-15T13:22:46.749249044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 13:22:46.749414 containerd[1622]: time="2025-01-15T13:22:46.749296952Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 13:22:46.749533 containerd[1622]: time="2025-01-15T13:22:46.749503948Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 13:22:46.749643 containerd[1622]: time="2025-01-15T13:22:46.749608845Z" level=info msg="metadata content store policy set" policy=shared Jan 15 13:22:46.756163 systemd-hostnamed[1636]: Hostname set to (static) Jan 15 13:22:46.758490 containerd[1622]: time="2025-01-15T13:22:46.758338072Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 13:22:46.758577 containerd[1622]: time="2025-01-15T13:22:46.758498132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 15 13:22:46.758638 containerd[1622]: time="2025-01-15T13:22:46.758612406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 13:22:46.758696 containerd[1622]: time="2025-01-15T13:22:46.758650223Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 13:22:46.758696 containerd[1622]: time="2025-01-15T13:22:46.758683072Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 13:22:46.759016 containerd[1622]: time="2025-01-15T13:22:46.758984883Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 13:22:46.760360 containerd[1622]: time="2025-01-15T13:22:46.760318512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 13:22:46.760581 containerd[1622]: time="2025-01-15T13:22:46.760541693Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 13:22:46.760647 containerd[1622]: time="2025-01-15T13:22:46.760588390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 13:22:46.760647 containerd[1622]: time="2025-01-15T13:22:46.760619562Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 13:22:46.760753 containerd[1622]: time="2025-01-15T13:22:46.760649098Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760753 containerd[1622]: time="2025-01-15T13:22:46.760711535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760863 containerd[1622]: time="2025-01-15T13:22:46.760753522Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760863 containerd[1622]: time="2025-01-15T13:22:46.760804694Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760863 containerd[1622]: time="2025-01-15T13:22:46.760834051Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760993 containerd[1622]: time="2025-01-15T13:22:46.760868335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.760993 containerd[1622]: time="2025-01-15T13:22:46.760898136Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.763912 containerd[1622]: time="2025-01-15T13:22:46.763758565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 13:22:46.763912 containerd[1622]: time="2025-01-15T13:22:46.763852573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.763912 containerd[1622]: time="2025-01-15T13:22:46.763908692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764107 containerd[1622]: time="2025-01-15T13:22:46.763954123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 15 13:22:46.764107 containerd[1622]: time="2025-01-15T13:22:46.763985307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764107 containerd[1622]: time="2025-01-15T13:22:46.764016473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764107 containerd[1622]: time="2025-01-15T13:22:46.764047559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764107 containerd[1622]: time="2025-01-15T13:22:46.764078885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764292 containerd[1622]: time="2025-01-15T13:22:46.764115481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764292 containerd[1622]: time="2025-01-15T13:22:46.764144762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764292 containerd[1622]: time="2025-01-15T13:22:46.764182970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764292 containerd[1622]: time="2025-01-15T13:22:46.764213297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764292 containerd[1622]: time="2025-01-15T13:22:46.764259270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764509 containerd[1622]: time="2025-01-15T13:22:46.764299759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764509 containerd[1622]: time="2025-01-15T13:22:46.764340184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 13:22:46.764509 containerd[1622]: time="2025-01-15T13:22:46.764391205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764509 containerd[1622]: time="2025-01-15T13:22:46.764428789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.764509 containerd[1622]: time="2025-01-15T13:22:46.764459498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 13:22:46.764704 containerd[1622]: time="2025-01-15T13:22:46.764547111Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 13:22:46.764704 containerd[1622]: time="2025-01-15T13:22:46.764585071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 13:22:46.764704 containerd[1622]: time="2025-01-15T13:22:46.764611938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 13:22:46.764704 containerd[1622]: time="2025-01-15T13:22:46.764651141Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 13:22:46.764704 containerd[1622]: time="2025-01-15T13:22:46.764676934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 15 13:22:46.764946 containerd[1622]: time="2025-01-15T13:22:46.764709413Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 13:22:46.764946 containerd[1622]: time="2025-01-15T13:22:46.764735438Z" level=info msg="NRI interface is disabled by configuration." Jan 15 13:22:46.764946 containerd[1622]: time="2025-01-15T13:22:46.764761444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 15 13:22:46.765918 containerd[1622]: time="2025-01-15T13:22:46.765330969Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 13:22:46.765918 containerd[1622]: time="2025-01-15T13:22:46.765461812Z" level=info msg="Connect containerd service" Jan 15 13:22:46.765918 containerd[1622]: time="2025-01-15T13:22:46.765554980Z" level=info msg="using legacy CRI server" Jan 15 13:22:46.765918 containerd[1622]: time="2025-01-15T13:22:46.765579163Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 13:22:46.765918 containerd[1622]: 
time="2025-01-15T13:22:46.765764493Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768278635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768604505Z" level=info msg="Start subscribing containerd event" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768702252Z" level=info msg="Start recovering state" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768817314Z" level=info msg="Start event monitor" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768850694Z" level=info msg="Start snapshots syncer" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768874245Z" level=info msg="Start cni network conf syncer for default" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.768891278Z" level=info msg="Start streaming server" Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.770327564Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.770414229Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 13:22:46.772613 containerd[1622]: time="2025-01-15T13:22:46.770526368Z" level=info msg="containerd successfully booted in 0.145737s" Jan 15 13:22:46.770667 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 13:22:46.942997 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 13:22:47.026382 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 13:22:47.036503 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 13:22:47.143234 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 13:22:47.143677 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 13:22:47.160471 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 13:22:47.214973 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 13:22:47.229122 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 13:22:47.241427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 13:22:47.242647 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 13:22:47.616803 tar[1617]: linux-amd64/LICENSE Jan 15 13:22:47.618305 tar[1617]: linux-amd64/README.md Jan 15 13:22:47.637166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 13:22:48.022206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:22:48.036588 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:22:48.578503 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 13:22:48.589403 systemd[1]: Started sshd@0-10.230.58.186:22-147.75.109.163:58846.service - OpenSSH per-connection server daemon (147.75.109.163:58846). 
Jan 15 13:22:49.079830 kubelet[1731]: E0115 13:22:49.079233 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:22:49.082882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:22:49.083315 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:22:49.522971 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 58846 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:22:49.527037 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:22:49.550167 systemd-logind[1604]: New session 1 of user core. Jan 15 13:22:49.552453 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 13:22:49.566698 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 13:22:49.623168 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 13:22:49.632419 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 13:22:49.653453 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 13:22:49.805741 systemd[1747]: Queued start job for default target default.target. Jan 15 13:22:49.806870 systemd[1747]: Created slice app.slice - User Application Slice. Jan 15 13:22:49.806931 systemd[1747]: Reached target paths.target - Paths. Jan 15 13:22:49.807085 systemd[1747]: Reached target timers.target - Timers. Jan 15 13:22:49.816046 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 13:22:49.826022 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 13:22:49.826099 systemd[1747]: Reached target sockets.target - Sockets. Jan 15 13:22:49.826124 systemd[1747]: Reached target basic.target - Basic System. Jan 15 13:22:49.826190 systemd[1747]: Reached target default.target - Main User Target. Jan 15 13:22:49.826287 systemd[1747]: Startup finished in 163ms. Jan 15 13:22:49.828987 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 13:22:49.841026 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 13:22:50.497399 systemd[1]: Started sshd@1-10.230.58.186:22-147.75.109.163:58854.service - OpenSSH per-connection server daemon (147.75.109.163:58854). Jan 15 13:22:51.398042 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 58854 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:22:51.400960 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:22:51.409088 systemd-logind[1604]: New session 2 of user core. Jan 15 13:22:51.421457 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 13:22:52.034294 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 15 13:22:52.039452 systemd[1]: sshd@1-10.230.58.186:22-147.75.109.163:58854.service: Deactivated successfully. Jan 15 13:22:52.044135 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit. Jan 15 13:22:52.045323 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 13:22:52.047338 systemd-logind[1604]: Removed session 2. 
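The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is written by `kubeadm init` or `kubeadm join`, which is why the unit keeps failing and being restarted until the node is joined. For orientation, a minimal KubeletConfiguration of the kind kubeadm generates, with placeholder values:

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # cgroupfs matches the SystemdCgroup:false runc option visible in the containerd CRI config logged above
    cgroupDriver: cgroupfs
    authentication:
      anonymous:
        enabled: false
    EOF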
Jan 15 13:22:52.188364 systemd[1]: Started sshd@2-10.230.58.186:22-147.75.109.163:58860.service - OpenSSH per-connection server daemon (147.75.109.163:58860). Jan 15 13:22:52.309318 login[1715]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 13:22:52.320400 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 13:22:52.323688 systemd-logind[1604]: New session 3 of user core. Jan 15 13:22:52.336397 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 13:22:52.341189 systemd-logind[1604]: New session 4 of user core. Jan 15 13:22:52.344107 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 13:22:52.761152 coreos-metadata[1573]: Jan 15 13:22:52.760 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:22:52.788985 coreos-metadata[1573]: Jan 15 13:22:52.788 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 15 13:22:52.795805 coreos-metadata[1573]: Jan 15 13:22:52.795 INFO Fetch failed with 404: resource not found Jan 15 13:22:52.795965 coreos-metadata[1573]: Jan 15 13:22:52.795 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 13:22:52.796693 coreos-metadata[1573]: Jan 15 13:22:52.796 INFO Fetch successful Jan 15 13:22:52.796924 coreos-metadata[1573]: Jan 15 13:22:52.796 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 15 13:22:52.811371 coreos-metadata[1573]: Jan 15 13:22:52.811 INFO Fetch successful Jan 15 13:22:52.811641 coreos-metadata[1573]: Jan 15 13:22:52.811 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 15 13:22:52.826583 coreos-metadata[1573]: Jan 15 13:22:52.826 INFO Fetch successful Jan 15 13:22:52.826851 coreos-metadata[1573]: Jan 15 13:22:52.826 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 15 13:22:52.842851 coreos-metadata[1573]: Jan 15 13:22:52.842 INFO Fetch successful Jan 15 13:22:52.843133 coreos-metadata[1573]: Jan 15 13:22:52.843 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 15 13:22:52.868585 coreos-metadata[1573]: Jan 15 13:22:52.868 INFO Fetch successful Jan 15 13:22:52.898684 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 13:22:52.899680 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 13:22:53.104305 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 58860 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:22:53.106755 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:22:53.113414 systemd-logind[1604]: New session 5 of user core. Jan 15 13:22:53.122701 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 13:22:53.732374 coreos-metadata[1679]: Jan 15 13:22:53.732 WARN failed to locate config-drive, using the metadata service API instead Jan 15 13:22:53.738133 sshd[1768]: pam_unix(sshd:session): session closed for user core Jan 15 13:22:53.742509 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit. Jan 15 13:22:53.743734 systemd[1]: sshd@2-10.230.58.186:22-147.75.109.163:58860.service: Deactivated successfully. Jan 15 13:22:53.753267 systemd[1]: session-5.scope: Deactivated successfully. 
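coreos-metadata above fails to locate a config-drive and falls back to the metadata service; the 404 on the OpenStack-native /openstack/2012-08-10/meta_data.json path is why it switches to the EC2-compatible /latest/meta-data/ tree. The same fetches are plain HTTP GETs that can be reproduced from the node:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4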
Jan 15 13:22:53.756283 coreos-metadata[1679]: Jan 15 13:22:53.756 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 15 13:22:53.757260 systemd-logind[1604]: Removed session 5. Jan 15 13:22:53.784830 coreos-metadata[1679]: Jan 15 13:22:53.784 INFO Fetch successful Jan 15 13:22:53.785184 coreos-metadata[1679]: Jan 15 13:22:53.785 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 15 13:22:53.822979 coreos-metadata[1679]: Jan 15 13:22:53.822 INFO Fetch successful Jan 15 13:22:53.828587 unknown[1679]: wrote ssh authorized keys file for user: core Jan 15 13:22:53.849993 update-ssh-keys[1819]: Updated "/home/core/.ssh/authorized_keys" Jan 15 13:22:53.850961 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 13:22:53.858696 systemd[1]: Finished sshkeys.service. Jan 15 13:22:53.861887 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 13:22:53.863108 systemd[1]: Startup finished in 18.398s (kernel) + 14.410s (userspace) = 32.809s. Jan 15 13:22:59.193608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 13:22:59.203123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:22:59.503116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:22:59.508645 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:22:59.610691 kubelet[1837]: E0115 13:22:59.610587 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:22:59.615521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:22:59.615839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:23:04.040307 systemd[1]: Started sshd@3-10.230.58.186:22-147.75.109.163:52842.service - OpenSSH per-connection server daemon (147.75.109.163:52842). Jan 15 13:23:04.267248 systemd[1]: Started sshd@4-10.230.58.186:22-120.157.229.219:46032.service - OpenSSH per-connection server daemon (120.157.229.219:46032). Jan 15 13:23:04.278975 sshd[1848]: Connection closed by 120.157.229.219 port 46032 Jan 15 13:23:04.279864 systemd[1]: sshd@4-10.230.58.186:22-120.157.229.219:46032.service: Deactivated successfully. Jan 15 13:23:04.925637 sshd[1846]: Accepted publickey for core from 147.75.109.163 port 52842 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:04.927874 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:04.935559 systemd-logind[1604]: New session 6 of user core. Jan 15 13:23:04.952396 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 13:23:05.547140 sshd[1846]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:05.551831 systemd[1]: sshd@3-10.230.58.186:22-147.75.109.163:52842.service: Deactivated successfully. Jan 15 13:23:05.556430 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 13:23:05.557670 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit. Jan 15 13:23:05.559512 systemd-logind[1604]: Removed session 6. 
Jan 15 13:23:05.724296 systemd[1]: Started sshd@5-10.230.58.186:22-147.75.109.163:52848.service - OpenSSH per-connection server daemon (147.75.109.163:52848). Jan 15 13:23:06.608612 sshd[1858]: Accepted publickey for core from 147.75.109.163 port 52848 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:06.610767 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:06.617444 systemd-logind[1604]: New session 7 of user core. Jan 15 13:23:06.628327 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 13:23:07.222966 sshd[1858]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:07.226758 systemd[1]: sshd@5-10.230.58.186:22-147.75.109.163:52848.service: Deactivated successfully. Jan 15 13:23:07.231299 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 13:23:07.231300 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Jan 15 13:23:07.233417 systemd-logind[1604]: Removed session 7. Jan 15 13:23:07.380678 systemd[1]: Started sshd@6-10.230.58.186:22-147.75.109.163:52850.service - OpenSSH per-connection server daemon (147.75.109.163:52850). Jan 15 13:23:08.263318 sshd[1866]: Accepted publickey for core from 147.75.109.163 port 52850 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:08.265522 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:08.271915 systemd-logind[1604]: New session 8 of user core. Jan 15 13:23:08.280308 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 13:23:08.885273 sshd[1866]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:08.889015 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. Jan 15 13:23:08.889725 systemd[1]: sshd@6-10.230.58.186:22-147.75.109.163:52850.service: Deactivated successfully. Jan 15 13:23:08.894227 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 13:23:08.895113 systemd-logind[1604]: Removed session 8. Jan 15 13:23:09.039251 systemd[1]: Started sshd@7-10.230.58.186:22-147.75.109.163:55382.service - OpenSSH per-connection server daemon (147.75.109.163:55382). Jan 15 13:23:09.693327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 13:23:09.708184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:09.861103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:09.866258 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:23:09.923428 sshd[1874]: Accepted publickey for core from 147.75.109.163 port 55382 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:09.925840 sshd[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:09.934000 systemd-logind[1604]: New session 9 of user core. Jan 15 13:23:09.942533 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 15 13:23:09.968331 kubelet[1888]: E0115 13:23:09.966697 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:23:09.970386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:23:09.970779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:23:10.415488 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 13:23:10.416052 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:23:10.430128 sudo[1899]: pam_unix(sudo:session): session closed for user root Jan 15 13:23:10.575347 sshd[1874]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:10.580123 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Jan 15 13:23:10.581345 systemd[1]: sshd@7-10.230.58.186:22-147.75.109.163:55382.service: Deactivated successfully. Jan 15 13:23:10.586591 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 13:23:10.587448 systemd-logind[1604]: Removed session 9. Jan 15 13:23:10.728307 systemd[1]: Started sshd@8-10.230.58.186:22-147.75.109.163:55386.service - OpenSSH per-connection server daemon (147.75.109.163:55386). Jan 15 13:23:11.606097 sshd[1904]: Accepted publickey for core from 147.75.109.163 port 55386 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:11.608426 sshd[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:11.616154 systemd-logind[1604]: New session 10 of user core. Jan 15 13:23:11.626351 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 13:23:12.082722 sudo[1909]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 13:23:12.083251 sudo[1909]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:23:12.088605 sudo[1909]: pam_unix(sudo:session): session closed for user root Jan 15 13:23:12.096510 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 13:23:12.097001 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:23:12.117234 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 13:23:12.119796 auditctl[1912]: No rules Jan 15 13:23:12.120616 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 13:23:12.120956 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 13:23:12.140379 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 13:23:12.172045 augenrules[1931]: No rules Jan 15 13:23:12.172926 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 13:23:12.176199 sudo[1908]: pam_unix(sudo:session): session closed for user root Jan 15 13:23:12.321174 sshd[1904]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:12.325770 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Jan 15 13:23:12.326447 systemd[1]: sshd@8-10.230.58.186:22-147.75.109.163:55386.service: Deactivated successfully. Jan 15 13:23:12.330668 systemd[1]: session-10.scope: Deactivated successfully. 
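The sudo entries above show the install script switching SELinux to enforcing (setenforce 1) and then replacing the default audit policy: the two shipped rule files under /etc/audit/rules.d are removed and audit-rules is restarted, after which auditctl reports "No rules". The same state can be checked with the standard auditd tooling (augenrules ships with the audit package):

    auditctl -l          # list the audit rules currently loaded in the kernel
    augenrules --check   # report whether /etc/audit/rules.d and the compiled audit.rules are in sync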
Jan 15 13:23:12.332112 systemd-logind[1604]: Removed session 10. Jan 15 13:23:12.472218 systemd[1]: Started sshd@9-10.230.58.186:22-147.75.109.163:55394.service - OpenSSH per-connection server daemon (147.75.109.163:55394). Jan 15 13:23:13.355450 sshd[1940]: Accepted publickey for core from 147.75.109.163 port 55394 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:23:13.358456 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:23:13.364754 systemd-logind[1604]: New session 11 of user core. Jan 15 13:23:13.381339 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 13:23:13.833651 sudo[1944]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 13:23:13.834760 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 13:23:14.751289 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 13:23:14.751703 (dockerd)[1961]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 13:23:15.450061 dockerd[1961]: time="2025-01-15T13:23:15.449963780Z" level=info msg="Starting up" Jan 15 13:23:15.896585 dockerd[1961]: time="2025-01-15T13:23:15.895825175Z" level=info msg="Loading containers: start." Jan 15 13:23:16.055958 kernel: Initializing XFRM netlink socket Jan 15 13:23:16.165100 systemd-networkd[1260]: docker0: Link UP Jan 15 13:23:16.184919 dockerd[1961]: time="2025-01-15T13:23:16.184701747Z" level=info msg="Loading containers: done." Jan 15 13:23:16.217773 dockerd[1961]: time="2025-01-15T13:23:16.216707795Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 13:23:16.217773 dockerd[1961]: time="2025-01-15T13:23:16.216910487Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 13:23:16.217773 dockerd[1961]: time="2025-01-15T13:23:16.217122385Z" level=info msg="Daemon has completed initialization" Jan 15 13:23:16.259105 dockerd[1961]: time="2025-01-15T13:23:16.258996654Z" level=info msg="API listen on /run/docker.sock" Jan 15 13:23:16.261810 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 13:23:16.852102 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 15 13:23:17.923146 containerd[1622]: time="2025-01-15T13:23:17.923011128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 15 13:23:18.742350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274525921.mount: Deactivated successfully. Jan 15 13:23:20.193625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 13:23:20.203633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:20.438286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
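dockerd's warning above about "not using native diff for overlay2" is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled it falls back to the slower naive differ when committing image layers, but the overlay2 storage driver itself is in use, as a quick check confirms:

    docker info --format '{{.Driver}}'    # expected to print: overlay2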
Jan 15 13:23:20.456155 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:23:20.638417 kubelet[2179]: E0115 13:23:20.638289 2179 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:23:20.642003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:23:20.643273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 13:23:21.934825 containerd[1622]: time="2025-01-15T13:23:21.934728811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:21.935630 containerd[1622]: time="2025-01-15T13:23:21.935554540Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 15 13:23:21.937922 containerd[1622]: time="2025-01-15T13:23:21.936896738Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:21.941404 containerd[1622]: time="2025-01-15T13:23:21.941326164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:21.943612 containerd[1622]: time="2025-01-15T13:23:21.943178051Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 4.020011232s" Jan 15 13:23:21.943612 containerd[1622]: time="2025-01-15T13:23:21.943291782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 15 13:23:21.987064 containerd[1622]: time="2025-01-15T13:23:21.987010566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 15 13:23:24.921959 containerd[1622]: time="2025-01-15T13:23:24.920171191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:24.921959 containerd[1622]: time="2025-01-15T13:23:24.921777780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 15 13:23:24.923312 containerd[1622]: time="2025-01-15T13:23:24.923278777Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:24.927861 containerd[1622]: time="2025-01-15T13:23:24.927813268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 
13:23:24.929861 containerd[1622]: time="2025-01-15T13:23:24.929793565Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.942719819s" Jan 15 13:23:24.930048 containerd[1622]: time="2025-01-15T13:23:24.930017551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 15 13:23:24.964235 containerd[1622]: time="2025-01-15T13:23:24.964180312Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 15 13:23:27.063096 containerd[1622]: time="2025-01-15T13:23:27.062905966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:27.065811 containerd[1622]: time="2025-01-15T13:23:27.065703437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 15 13:23:27.066563 containerd[1622]: time="2025-01-15T13:23:27.066511297Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:27.071913 containerd[1622]: time="2025-01-15T13:23:27.070901960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:27.074197 containerd[1622]: time="2025-01-15T13:23:27.072735330Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.108252991s" Jan 15 13:23:27.074197 containerd[1622]: time="2025-01-15T13:23:27.072784550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 15 13:23:27.108983 containerd[1622]: time="2025-01-15T13:23:27.108749902Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 15 13:23:29.062026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711997002.mount: Deactivated successfully. 
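Each containerd pull above logs both the bytes transferred ("bytes read=...") and the wall-clock time of the pull, so the effective download rate can be read straight off the entries. A small sketch using the three figures logged so far; treating "bytes read" as the download volume is an approximation, since the "size" field in the Pulled message refers to the unpacked image:

```go
package main

import "fmt"

func main() {
	// bytes read / wall time, copied from the containerd pull entries above.
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // "in ...s" from the Pulled message
	}{
		{"kube-apiserver:v1.29.12", 35139262, 4.020011232},
		{"kube-controller-manager:v1.29.12", 32217740, 2.942719819},
		{"kube-scheduler:v1.29.12", 17332830, 2.108252991},
	}
	for _, p := range pulls {
		mibPerSec := p.bytes / p.seconds / (1024 * 1024)
		fmt.Printf("%-36s %6.1f MiB/s\n", p.image, mibPerSec)
	}
}
```

For this boot the rates work out to roughly 8 to 10 MiB/s against registry.k8s.io.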
Jan 15 13:23:29.936979 containerd[1622]: time="2025-01-15T13:23:29.936882842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:29.939590 containerd[1622]: time="2025-01-15T13:23:29.939522063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 15 13:23:29.939867 containerd[1622]: time="2025-01-15T13:23:29.939823333Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:29.945628 containerd[1622]: time="2025-01-15T13:23:29.945584006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:29.947006 containerd[1622]: time="2025-01-15T13:23:29.946965155Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.838161207s" Jan 15 13:23:29.947091 containerd[1622]: time="2025-01-15T13:23:29.947009860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 15 13:23:29.983072 containerd[1622]: time="2025-01-15T13:23:29.983027358Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 13:23:30.058426 update_engine[1607]: I20250115 13:23:30.058205 1607 update_attempter.cc:509] Updating boot flags... Jan 15 13:23:30.121979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2230) Jan 15 13:23:30.222938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2232) Jan 15 13:23:30.643257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268908495.mount: Deactivated successfully. Jan 15 13:23:30.693405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 13:23:30.705923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:30.947221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:30.949132 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 13:23:31.110145 kubelet[2261]: E0115 13:23:31.109987 2261 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 13:23:31.113461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:23:31.117153 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
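The restart counter in the entries above (3, then 4) is maintained by systemd itself, so the crash loop can also be observed from the unit's properties rather than the journal. A hedged sketch that shells out to systemctl show; NRestarts, Result and ExecMainStatus are standard service properties, and this is an inspection aid rather than something run during this boot:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask systemd for the restart counter and last exit result of kubelet.service.
	out, err := exec.Command("systemctl", "show", "kubelet.service",
		"-p", "NRestarts", "-p", "Result", "-p", "ExecMainStatus").Output()
	if err != nil {
		panic(err)
	}
	// Typical output while the unit is crash-looping as above (property order may vary):
	//   NRestarts=4
	//   Result=exit-code
	//   ExecMainStatus=1
	fmt.Print(string(out))
}
```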
Jan 15 13:23:32.408209 containerd[1622]: time="2025-01-15T13:23:32.408100053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:32.410017 containerd[1622]: time="2025-01-15T13:23:32.409964142Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 15 13:23:32.412532 containerd[1622]: time="2025-01-15T13:23:32.412491211Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:32.420377 containerd[1622]: time="2025-01-15T13:23:32.420301710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:32.425011 containerd[1622]: time="2025-01-15T13:23:32.424960799Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.438429305s" Jan 15 13:23:32.425101 containerd[1622]: time="2025-01-15T13:23:32.425061017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 15 13:23:32.467317 containerd[1622]: time="2025-01-15T13:23:32.467271190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 15 13:23:33.200923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290687770.mount: Deactivated successfully. 
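The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units that keep being deactivated are systemd's escaped spelling of containerd's temporary mount points: "/" is written as "-" and a literal "-" as "\x2d". A minimal sketch of the reverse mapping (the same thing systemd-escape --unescape --path does; the full escaping rules cover more characters than are handled here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit converts a systemd mount unit name back into the path it
// represents: strip the ".mount" suffix, turn "-" into "/", and decode "\xNN"
// escapes (which protect literal characters such as "-") back into bytes.
func unescapeMountUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount3290687770.mount`))
	// -> /var/lib/containerd/tmpmounts/containerd-mount3290687770
}
```

Running it on the unit name from the last entry above yields /var/lib/containerd/tmpmounts/containerd-mount3290687770.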
Jan 15 13:23:33.206771 containerd[1622]: time="2025-01-15T13:23:33.206552551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:33.207746 containerd[1622]: time="2025-01-15T13:23:33.207699781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 15 13:23:33.209914 containerd[1622]: time="2025-01-15T13:23:33.208379881Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:33.211645 containerd[1622]: time="2025-01-15T13:23:33.211573553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:33.213158 containerd[1622]: time="2025-01-15T13:23:33.212983118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 745.430881ms" Jan 15 13:23:33.213158 containerd[1622]: time="2025-01-15T13:23:33.213036425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 15 13:23:33.243325 containerd[1622]: time="2025-01-15T13:23:33.243228172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 15 13:23:33.903863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486893309.mount: Deactivated successfully. Jan 15 13:23:36.855309 containerd[1622]: time="2025-01-15T13:23:36.855099878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:36.856836 containerd[1622]: time="2025-01-15T13:23:36.856779049Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 15 13:23:36.857558 containerd[1622]: time="2025-01-15T13:23:36.857479180Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:36.861873 containerd[1622]: time="2025-01-15T13:23:36.861786377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:23:36.864139 containerd[1622]: time="2025-01-15T13:23:36.863936435Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.620644746s" Jan 15 13:23:36.864139 containerd[1622]: time="2025-01-15T13:23:36.863993532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 15 13:23:40.799677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
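containerd writes its entries in a logfmt-style key=value form (time=..., level=..., msg=...), which makes figures such as the etcd pull duration above easy to extract programmatically. A rough sketch using a shortened copy of the etcd entry; the regular expressions are illustrative rather than a complete logfmt parser:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	// One containerd entry copied from the log above (middle elided).
	line := `time="2025-01-15T13:23:36.863936435Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... in 3.620644746s"`

	// Pick out the pieces we care about from the key=value layout.
	ts := regexp.MustCompile(`time="([^"]+)"`).FindStringSubmatch(line)[1]
	level := regexp.MustCompile(`level=(\w+)`).FindStringSubmatch(line)[1]
	dur := regexp.MustCompile(`in ([0-9.]+m?s)"?$`).FindStringSubmatch(line)[1]

	when, _ := time.Parse(time.RFC3339Nano, ts)
	d, _ := time.ParseDuration(dur)
	fmt.Printf("%s [%s] etcd pull took %v\n", when.Format(time.RFC3339), level, d)
}
```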
Jan 15 13:23:40.808179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:40.847094 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-11.scope)... Jan 15 13:23:40.847157 systemd[1]: Reloading... Jan 15 13:23:41.027993 zram_generator::config[2460]: No configuration found. Jan 15 13:23:41.229084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:23:41.334320 systemd[1]: Reloading finished in 486 ms. Jan 15 13:23:41.406861 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 13:23:41.407078 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 13:23:41.407530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:41.421828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:41.565100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:41.576494 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 13:23:41.681202 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:23:41.682139 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 13:23:41.682139 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:23:41.685021 kubelet[2543]: I0115 13:23:41.684380 2543 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 13:23:41.967781 kubelet[2543]: I0115 13:23:41.966151 2543 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 13:23:41.967781 kubelet[2543]: I0115 13:23:41.966218 2543 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 13:23:41.967781 kubelet[2543]: I0115 13:23:41.966545 2543 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 13:23:41.995843 kubelet[2543]: I0115 13:23:41.995579 2543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:23:41.996301 kubelet[2543]: E0115 13:23:41.996276 2543 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.58.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.014816 kubelet[2543]: I0115 13:23:42.014789 2543 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 13:23:42.018607 kubelet[2543]: I0115 13:23:42.018571 2543 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 13:23:42.019845 kubelet[2543]: I0115 13:23:42.019814 2543 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 13:23:42.020264 kubelet[2543]: I0115 13:23:42.020227 2543 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 13:23:42.020412 kubelet[2543]: I0115 13:23:42.020390 2543 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 13:23:42.020772 kubelet[2543]: I0115 13:23:42.020748 2543 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:23:42.021146 kubelet[2543]: I0115 13:23:42.021124 2543 kubelet.go:396] "Attempting to sync node with API server" Jan 15 13:23:42.021304 kubelet[2543]: I0115 13:23:42.021283 2543 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 13:23:42.021504 kubelet[2543]: I0115 13:23:42.021482 2543 kubelet.go:312] "Adding apiserver pod source" Jan 15 13:23:42.021676 kubelet[2543]: I0115 13:23:42.021653 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 13:23:42.026639 kubelet[2543]: W0115 13:23:42.026571 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.58.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-yypfq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.026800 kubelet[2543]: E0115 13:23:42.026776 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.58.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-yypfq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.027309 kubelet[2543]: I0115 13:23:42.027284 2543 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 13:23:42.029368 kubelet[2543]: W0115 
13:23:42.029282 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.58.186:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.029368 kubelet[2543]: E0115 13:23:42.029341 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.58.186:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.032502 kubelet[2543]: I0115 13:23:42.032477 2543 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 13:23:42.034696 kubelet[2543]: W0115 13:23:42.034670 2543 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 13:23:42.036585 kubelet[2543]: I0115 13:23:42.036322 2543 server.go:1256] "Started kubelet" Jan 15 13:23:42.043271 kubelet[2543]: I0115 13:23:42.043245 2543 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 13:23:42.045177 kubelet[2543]: I0115 13:23:42.045151 2543 server.go:461] "Adding debug handlers to kubelet server" Jan 15 13:23:42.047219 kubelet[2543]: I0115 13:23:42.047188 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 13:23:42.047908 kubelet[2543]: I0115 13:23:42.047714 2543 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 13:23:42.049222 kubelet[2543]: I0115 13:23:42.049195 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 13:23:42.052912 kubelet[2543]: E0115 13:23:42.052656 2543 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.58.186:6443/api/v1/namespaces/default/events\": dial tcp 10.230.58.186:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-yypfq.gb1.brightbox.com.181ae07a4400d416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-yypfq.gb1.brightbox.com,UID:srv-yypfq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-yypfq.gb1.brightbox.com,},FirstTimestamp:2025-01-15 13:23:42.036284438 +0000 UTC m=+0.453461201,LastTimestamp:2025-01-15 13:23:42.036284438 +0000 UTC m=+0.453461201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-yypfq.gb1.brightbox.com,}" Jan 15 13:23:42.059538 kubelet[2543]: E0115 13:23:42.059194 2543 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-yypfq.gb1.brightbox.com\" not found" Jan 15 13:23:42.059749 kubelet[2543]: I0115 13:23:42.059726 2543 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 13:23:42.060077 kubelet[2543]: I0115 13:23:42.060052 2543 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 13:23:42.060922 kubelet[2543]: I0115 13:23:42.060312 2543 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 13:23:42.060922 kubelet[2543]: W0115 13:23:42.060846 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.230.58.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.061181 kubelet[2543]: E0115 13:23:42.061158 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.58.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.064192 kubelet[2543]: E0115 13:23:42.064167 2543 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 13:23:42.064560 kubelet[2543]: I0115 13:23:42.064537 2543 factory.go:221] Registration of the systemd container factory successfully Jan 15 13:23:42.064794 kubelet[2543]: I0115 13:23:42.064767 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 13:23:42.065252 kubelet[2543]: E0115 13:23:42.065227 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-yypfq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.186:6443: connect: connection refused" interval="200ms" Jan 15 13:23:42.068256 kubelet[2543]: I0115 13:23:42.068233 2543 factory.go:221] Registration of the containerd container factory successfully Jan 15 13:23:42.096272 kubelet[2543]: I0115 13:23:42.096230 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 13:23:42.098974 kubelet[2543]: I0115 13:23:42.098951 2543 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 13:23:42.099575 kubelet[2543]: I0115 13:23:42.099141 2543 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 13:23:42.099575 kubelet[2543]: I0115 13:23:42.099193 2543 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 13:23:42.099575 kubelet[2543]: E0115 13:23:42.099308 2543 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 13:23:42.104375 kubelet[2543]: W0115 13:23:42.104227 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.58.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.104375 kubelet[2543]: E0115 13:23:42.104275 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.58.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:42.111246 kubelet[2543]: I0115 13:23:42.111201 2543 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 13:23:42.111380 kubelet[2543]: I0115 13:23:42.111265 2543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 13:23:42.111380 kubelet[2543]: I0115 13:23:42.111301 2543 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:23:42.113301 kubelet[2543]: I0115 13:23:42.113277 2543 policy_none.go:49] "None policy: Start" Jan 15 13:23:42.114445 kubelet[2543]: I0115 13:23:42.114380 2543 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 13:23:42.114571 kubelet[2543]: I0115 13:23:42.114530 2543 state_mem.go:35] "Initializing new in-memory state store" Jan 15 13:23:42.123915 kubelet[2543]: I0115 13:23:42.123611 2543 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 13:23:42.124203 kubelet[2543]: I0115 13:23:42.124057 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 13:23:42.128383 kubelet[2543]: E0115 13:23:42.128270 2543 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-yypfq.gb1.brightbox.com\" not found" Jan 15 13:23:42.163490 kubelet[2543]: I0115 13:23:42.163451 2543 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.164093 kubelet[2543]: E0115 13:23:42.163956 2543 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.58.186:6443/api/v1/nodes\": dial tcp 10.230.58.186:6443: connect: connection refused" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.200577 kubelet[2543]: I0115 13:23:42.200276 2543 topology_manager.go:215] "Topology Admit Handler" podUID="e321b598f94851ca42aa4697fedd3009" podNamespace="kube-system" podName="kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.202555 kubelet[2543]: I0115 13:23:42.202527 2543 topology_manager.go:215] "Topology Admit Handler" podUID="ef88e8ff83d2befbc440afcd6867f1b3" podNamespace="kube-system" podName="kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.204860 kubelet[2543]: I0115 13:23:42.204835 2543 topology_manager.go:215] "Topology Admit Handler" podUID="7c004ba3d8a0f3348d9c529778084296" podNamespace="kube-system" 
podName="kube-scheduler-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262181 kubelet[2543]: I0115 13:23:42.261838 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-ca-certs\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262181 kubelet[2543]: I0115 13:23:42.261928 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-flexvolume-dir\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262181 kubelet[2543]: I0115 13:23:42.261978 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-kubeconfig\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262181 kubelet[2543]: I0115 13:23:42.262022 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262181 kubelet[2543]: I0115 13:23:42.262078 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c004ba3d8a0f3348d9c529778084296-kubeconfig\") pod \"kube-scheduler-srv-yypfq.gb1.brightbox.com\" (UID: \"7c004ba3d8a0f3348d9c529778084296\") " pod="kube-system/kube-scheduler-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262551 kubelet[2543]: I0115 13:23:42.262157 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-k8s-certs\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262551 kubelet[2543]: I0115 13:23:42.262196 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-usr-share-ca-certificates\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.262551 kubelet[2543]: I0115 13:23:42.262264 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-ca-certs\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 
13:23:42.262551 kubelet[2543]: I0115 13:23:42.262311 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-k8s-certs\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.266188 kubelet[2543]: E0115 13:23:42.266149 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-yypfq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.186:6443: connect: connection refused" interval="400ms" Jan 15 13:23:42.367458 kubelet[2543]: I0115 13:23:42.367049 2543 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.367581 kubelet[2543]: E0115 13:23:42.367464 2543 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.58.186:6443/api/v1/nodes\": dial tcp 10.230.58.186:6443: connect: connection refused" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.516394 containerd[1622]: time="2025-01-15T13:23:42.515969898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-yypfq.gb1.brightbox.com,Uid:ef88e8ff83d2befbc440afcd6867f1b3,Namespace:kube-system,Attempt:0,}" Jan 15 13:23:42.516394 containerd[1622]: time="2025-01-15T13:23:42.515970098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-yypfq.gb1.brightbox.com,Uid:e321b598f94851ca42aa4697fedd3009,Namespace:kube-system,Attempt:0,}" Jan 15 13:23:42.525494 containerd[1622]: time="2025-01-15T13:23:42.525458374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-yypfq.gb1.brightbox.com,Uid:7c004ba3d8a0f3348d9c529778084296,Namespace:kube-system,Attempt:0,}" Jan 15 13:23:42.667726 kubelet[2543]: E0115 13:23:42.667673 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-yypfq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.186:6443: connect: connection refused" interval="800ms" Jan 15 13:23:42.770990 kubelet[2543]: I0115 13:23:42.770828 2543 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:42.771521 kubelet[2543]: E0115 13:23:42.771289 2543 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.58.186:6443/api/v1/nodes\": dial tcp 10.230.58.186:6443: connect: connection refused" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:43.021169 kubelet[2543]: E0115 13:23:43.021038 2543 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.58.186:6443/api/v1/namespaces/default/events\": dial tcp 10.230.58.186:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-yypfq.gb1.brightbox.com.181ae07a4400d416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-yypfq.gb1.brightbox.com,UID:srv-yypfq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-yypfq.gb1.brightbox.com,},FirstTimestamp:2025-01-15 13:23:42.036284438 +0000 UTC m=+0.453461201,LastTimestamp:2025-01-15 13:23:42.036284438 +0000 
UTC m=+0.453461201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-yypfq.gb1.brightbox.com,}" Jan 15 13:23:43.176544 kubelet[2543]: W0115 13:23:43.176274 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.58.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.176544 kubelet[2543]: E0115 13:23:43.176349 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.58.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.197072 kubelet[2543]: W0115 13:23:43.196977 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.58.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.197072 kubelet[2543]: E0115 13:23:43.197037 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.58.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.199299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082047728.mount: Deactivated successfully. Jan 15 13:23:43.204571 containerd[1622]: time="2025-01-15T13:23:43.204485840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:23:43.206704 containerd[1622]: time="2025-01-15T13:23:43.206619949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 13:23:43.208017 containerd[1622]: time="2025-01-15T13:23:43.207965173Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:23:43.209958 containerd[1622]: time="2025-01-15T13:23:43.209735318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 13:23:43.209958 containerd[1622]: time="2025-01-15T13:23:43.209827295Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:23:43.211023 containerd[1622]: time="2025-01-15T13:23:43.210986619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:23:43.211782 containerd[1622]: time="2025-01-15T13:23:43.211709590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 15 13:23:43.215232 containerd[1622]: time="2025-01-15T13:23:43.215166826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 13:23:43.219938 containerd[1622]: time="2025-01-15T13:23:43.217956293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 692.422675ms" Jan 15 13:23:43.221098 containerd[1622]: time="2025-01-15T13:23:43.221061869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 704.916454ms" Jan 15 13:23:43.232094 containerd[1622]: time="2025-01-15T13:23:43.232056632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 715.911396ms" Jan 15 13:23:43.267561 kubelet[2543]: W0115 13:23:43.267498 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.58.186:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.267810 kubelet[2543]: E0115 13:23:43.267788 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.58.186:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.410612 containerd[1622]: time="2025-01-15T13:23:43.409857971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:23:43.410612 containerd[1622]: time="2025-01-15T13:23:43.410024635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:23:43.410612 containerd[1622]: time="2025-01-15T13:23:43.410052988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.412155 containerd[1622]: time="2025-01-15T13:23:43.411511237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.421441 containerd[1622]: time="2025-01-15T13:23:43.421188142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:23:43.421441 containerd[1622]: time="2025-01-15T13:23:43.421341437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:23:43.421775 containerd[1622]: time="2025-01-15T13:23:43.421698328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.422576 containerd[1622]: time="2025-01-15T13:23:43.422371397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.428912 containerd[1622]: time="2025-01-15T13:23:43.428738481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:23:43.429078 containerd[1622]: time="2025-01-15T13:23:43.428912155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:23:43.429078 containerd[1622]: time="2025-01-15T13:23:43.428952970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.429920 containerd[1622]: time="2025-01-15T13:23:43.429657701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:23:43.472727 kubelet[2543]: E0115 13:23:43.472687 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-yypfq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.186:6443: connect: connection refused" interval="1.6s" Jan 15 13:23:43.578570 kubelet[2543]: I0115 13:23:43.578534 2543 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:43.579421 kubelet[2543]: E0115 13:23:43.579399 2543 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.58.186:6443/api/v1/nodes\": dial tcp 10.230.58.186:6443: connect: connection refused" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:43.593662 containerd[1622]: time="2025-01-15T13:23:43.593497084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-yypfq.gb1.brightbox.com,Uid:7c004ba3d8a0f3348d9c529778084296,Namespace:kube-system,Attempt:0,} returns sandbox id \"f36913eb2521a655c2149ee63168724b3d8d2670bf1f66a40e885525ed5f6b51\"" Jan 15 13:23:43.605224 containerd[1622]: time="2025-01-15T13:23:43.605180219Z" level=info msg="CreateContainer within sandbox \"f36913eb2521a655c2149ee63168724b3d8d2670bf1f66a40e885525ed5f6b51\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 13:23:43.608231 containerd[1622]: time="2025-01-15T13:23:43.608119726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-yypfq.gb1.brightbox.com,Uid:ef88e8ff83d2befbc440afcd6867f1b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ee274230073d6c9833041c8b3eddd9da77e8374b01d54240ce643d9115d615\"" Jan 15 13:23:43.613763 kubelet[2543]: W0115 13:23:43.613648 2543 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.58.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-yypfq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.614309 kubelet[2543]: E0115 13:23:43.613841 2543 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.58.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-yypfq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:43.614378 containerd[1622]: time="2025-01-15T13:23:43.614094587Z" level=info msg="CreateContainer within sandbox \"03ee274230073d6c9833041c8b3eddd9da77e8374b01d54240ce643d9115d615\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 13:23:43.615163 containerd[1622]: time="2025-01-15T13:23:43.615106749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-yypfq.gb1.brightbox.com,Uid:e321b598f94851ca42aa4697fedd3009,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b9f032fda0e90949953a7fc824726b4ad35589c24b4d306c57f30440751ed1f\"" Jan 15 13:23:43.619694 containerd[1622]: time="2025-01-15T13:23:43.619552149Z" level=info msg="CreateContainer within sandbox \"5b9f032fda0e90949953a7fc824726b4ad35589c24b4d306c57f30440751ed1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 13:23:43.631253 containerd[1622]: time="2025-01-15T13:23:43.631044641Z" level=info msg="CreateContainer within sandbox \"f36913eb2521a655c2149ee63168724b3d8d2670bf1f66a40e885525ed5f6b51\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb88f9c1099ecf29429ac86df8dfe7967410de9ea7bc00ffc7769f4e323b60e5\"" Jan 15 13:23:43.633594 containerd[1622]: time="2025-01-15T13:23:43.633320808Z" level=info msg="StartContainer for \"bb88f9c1099ecf29429ac86df8dfe7967410de9ea7bc00ffc7769f4e323b60e5\"" Jan 15 13:23:43.637400 containerd[1622]: time="2025-01-15T13:23:43.637361967Z" level=info msg="CreateContainer within sandbox \"03ee274230073d6c9833041c8b3eddd9da77e8374b01d54240ce643d9115d615\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"242fad244edba8efdda8d65599583ed17180430afde350c9bb7efcd883c07191\"" Jan 15 13:23:43.638311 containerd[1622]: time="2025-01-15T13:23:43.637869157Z" level=info msg="StartContainer for \"242fad244edba8efdda8d65599583ed17180430afde350c9bb7efcd883c07191\"" Jan 15 13:23:43.641181 containerd[1622]: time="2025-01-15T13:23:43.641145281Z" level=info msg="CreateContainer within sandbox \"5b9f032fda0e90949953a7fc824726b4ad35589c24b4d306c57f30440751ed1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f14b5d9121a1f198447735f26cc34a92b56cc5476a6122ea341a57a687f30383\"" Jan 15 13:23:43.643126 containerd[1622]: time="2025-01-15T13:23:43.643093769Z" level=info msg="StartContainer for \"f14b5d9121a1f198447735f26cc34a92b56cc5476a6122ea341a57a687f30383\"" Jan 15 13:23:43.801147 containerd[1622]: time="2025-01-15T13:23:43.801098443Z" level=info msg="StartContainer for \"f14b5d9121a1f198447735f26cc34a92b56cc5476a6122ea341a57a687f30383\" returns successfully" Jan 15 13:23:43.829122 containerd[1622]: time="2025-01-15T13:23:43.828357475Z" level=info msg="StartContainer for \"242fad244edba8efdda8d65599583ed17180430afde350c9bb7efcd883c07191\" returns successfully" Jan 15 13:23:43.836692 containerd[1622]: time="2025-01-15T13:23:43.836326228Z" level=info msg="StartContainer for \"bb88f9c1099ecf29429ac86df8dfe7967410de9ea7bc00ffc7769f4e323b60e5\" returns successfully" Jan 15 13:23:44.065948 kubelet[2543]: E0115 13:23:44.065600 2543 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.58.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.58.186:6443: connect: connection refused Jan 15 13:23:45.184247 kubelet[2543]: I0115 13:23:45.184200 2543 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:46.689684 kubelet[2543]: E0115 13:23:46.689607 2543 nodelease.go:49] "Failed to get node when trying to set owner 
ref to the node lease" err="nodes \"srv-yypfq.gb1.brightbox.com\" not found" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:46.764153 kubelet[2543]: I0115 13:23:46.763936 2543 kubelet_node_status.go:76] "Successfully registered node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:47.028556 kubelet[2543]: I0115 13:23:47.028408 2543 apiserver.go:52] "Watching apiserver" Jan 15 13:23:47.061040 kubelet[2543]: I0115 13:23:47.060949 2543 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 13:23:48.353942 kubelet[2543]: W0115 13:23:48.353159 2543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:23:49.764572 systemd[1]: Reloading requested from client PID 2823 ('systemctl') (unit session-11.scope)... Jan 15 13:23:49.764603 systemd[1]: Reloading... Jan 15 13:23:49.863106 zram_generator::config[2859]: No configuration found. Jan 15 13:23:50.101299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 13:23:50.214846 systemd[1]: Reloading finished in 449 ms. Jan 15 13:23:50.266592 kubelet[2543]: I0115 13:23:50.266508 2543 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:23:50.266727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:50.283574 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 13:23:50.284158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:50.294692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 13:23:50.500981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 13:23:50.522214 (kubelet)[2936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 13:23:50.697216 kubelet[2936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 13:23:50.697216 kubelet[2936]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 13:23:50.697216 kubelet[2936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
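The restarted kubelet just below loads its bootstrapped client credentials from /var/lib/kubelet/pki/kubelet-client-current.pem ("Client rotation is on" was already logged by the previous instance, whose CSR attempts failed only because the API server was not yet reachable). That file normally holds the client certificate followed by its private key, so its expiry can be checked with the standard library; a hedged sketch for inspection only:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// Walk the PEM blocks and report every certificate found; the key block is skipped.
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s issuer=%s notAfter=%s\n",
			cert.Subject, cert.Issuer, cert.NotAfter.Format("2006-01-02"))
	}
}
```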
Jan 15 13:23:50.697835 kubelet[2936]: I0115 13:23:50.697597 2936 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 13:23:50.728627 kubelet[2936]: I0115 13:23:50.728562 2936 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 13:23:50.728627 kubelet[2936]: I0115 13:23:50.728614 2936 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 13:23:50.730385 kubelet[2936]: I0115 13:23:50.730084 2936 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 13:23:50.734413 kubelet[2936]: I0115 13:23:50.733509 2936 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 13:23:50.754147 kubelet[2936]: I0115 13:23:50.752852 2936 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 13:23:50.767631 kubelet[2936]: I0115 13:23:50.767595 2936 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 13:23:50.768686 kubelet[2936]: I0115 13:23:50.768618 2936 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 13:23:50.769427 kubelet[2936]: I0115 13:23:50.769045 2936 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 13:23:50.769427 kubelet[2936]: I0115 13:23:50.769125 2936 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 13:23:50.769427 kubelet[2936]: I0115 13:23:50.769144 2936 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 13:23:50.769427 kubelet[2936]: I0115 13:23:50.769315 2936 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:23:50.769915 kubelet[2936]: I0115 13:23:50.769531 2936 kubelet.go:396] "Attempting to sync node with API server" Jan 15 13:23:50.769915 kubelet[2936]: I0115 13:23:50.769843 2936 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 13:23:50.770151 kubelet[2936]: I0115 13:23:50.769933 2936 kubelet.go:312] "Adding apiserver pod source" Jan 15 
13:23:50.770151 kubelet[2936]: I0115 13:23:50.769957 2936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 13:23:50.771857 kubelet[2936]: I0115 13:23:50.771353 2936 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 13:23:50.771857 kubelet[2936]: I0115 13:23:50.771664 2936 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 13:23:50.773724 kubelet[2936]: I0115 13:23:50.773686 2936 server.go:1256] "Started kubelet" Jan 15 13:23:50.793975 kubelet[2936]: I0115 13:23:50.792819 2936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 13:23:50.810024 kubelet[2936]: I0115 13:23:50.809160 2936 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 13:23:50.812206 kubelet[2936]: I0115 13:23:50.811997 2936 server.go:461] "Adding debug handlers to kubelet server" Jan 15 13:23:50.815981 kubelet[2936]: I0115 13:23:50.814746 2936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 13:23:50.815981 kubelet[2936]: I0115 13:23:50.815838 2936 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 13:23:50.823258 kubelet[2936]: I0115 13:23:50.823218 2936 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 13:23:50.837629 kubelet[2936]: I0115 13:23:50.837594 2936 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 15 13:23:50.838371 kubelet[2936]: I0115 13:23:50.838342 2936 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 13:23:50.847141 kubelet[2936]: I0115 13:23:50.847107 2936 factory.go:221] Registration of the systemd container factory successfully Jan 15 13:23:50.847367 kubelet[2936]: I0115 13:23:50.847309 2936 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 13:23:50.858614 kubelet[2936]: I0115 13:23:50.858545 2936 factory.go:221] Registration of the containerd container factory successfully Jan 15 13:23:50.875708 kubelet[2936]: I0115 13:23:50.875683 2936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 13:23:50.882465 kubelet[2936]: I0115 13:23:50.882441 2936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 13:23:50.882609 kubelet[2936]: I0115 13:23:50.882589 2936 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 13:23:50.882724 kubelet[2936]: I0115 13:23:50.882705 2936 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 13:23:50.883431 kubelet[2936]: E0115 13:23:50.882913 2936 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 13:23:50.889296 kubelet[2936]: E0115 13:23:50.889257 2936 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 13:23:50.949772 kubelet[2936]: I0115 13:23:50.949430 2936 kubelet_node_status.go:73] "Attempting to register node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:50.965596 kubelet[2936]: I0115 13:23:50.965375 2936 kubelet_node_status.go:112] "Node was previously registered" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:50.965695 kubelet[2936]: I0115 13:23:50.965660 2936 kubelet_node_status.go:76] "Successfully registered node" node="srv-yypfq.gb1.brightbox.com" Jan 15 13:23:50.987441 kubelet[2936]: E0115 13:23:50.986862 2936 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.058944 2936 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.058989 2936 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.059034 2936 state_mem.go:36] "Initialized new in-memory state store" Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.059339 2936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.059381 2936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 13:23:51.059700 kubelet[2936]: I0115 13:23:51.059401 2936 policy_none.go:49] "None policy: Start" Jan 15 13:23:51.062965 kubelet[2936]: I0115 13:23:51.062877 2936 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 13:23:51.062965 kubelet[2936]: I0115 13:23:51.062936 2936 state_mem.go:35] "Initializing new in-memory state store" Jan 15 13:23:51.063254 kubelet[2936]: I0115 13:23:51.063226 2936 state_mem.go:75] "Updated machine memory state" Jan 15 13:23:51.065539 kubelet[2936]: I0115 13:23:51.065489 2936 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 13:23:51.071794 kubelet[2936]: I0115 13:23:51.071565 2936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 13:23:51.188189 kubelet[2936]: I0115 13:23:51.187641 2936 topology_manager.go:215] "Topology Admit Handler" podUID="e321b598f94851ca42aa4697fedd3009" podNamespace="kube-system" podName="kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.188189 kubelet[2936]: I0115 13:23:51.187758 2936 topology_manager.go:215] "Topology Admit Handler" podUID="ef88e8ff83d2befbc440afcd6867f1b3" podNamespace="kube-system" podName="kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.188189 kubelet[2936]: I0115 13:23:51.187819 2936 topology_manager.go:215] "Topology Admit Handler" podUID="7c004ba3d8a0f3348d9c529778084296" podNamespace="kube-system" podName="kube-scheduler-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.200830 kubelet[2936]: W0115 13:23:51.200366 2936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:23:51.200830 kubelet[2936]: W0115 13:23:51.200565 2936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:23:51.204356 kubelet[2936]: W0115 13:23:51.204327 2936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:23:51.204493 
kubelet[2936]: E0115 13:23:51.204449 2936 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.243783 kubelet[2936]: I0115 13:23:51.242510 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-ca-certs\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.243996 kubelet[2936]: I0115 13:23:51.243957 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-k8s-certs\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244070 kubelet[2936]: I0115 13:23:51.244045 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-k8s-certs\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244124 kubelet[2936]: I0115 13:23:51.244090 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e321b598f94851ca42aa4697fedd3009-usr-share-ca-certificates\") pod \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" (UID: \"e321b598f94851ca42aa4697fedd3009\") " pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244124 kubelet[2936]: I0115 13:23:51.244124 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-ca-certs\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244265 kubelet[2936]: I0115 13:23:51.244168 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-flexvolume-dir\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244265 kubelet[2936]: I0115 13:23:51.244213 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-kubeconfig\") pod \"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244265 kubelet[2936]: I0115 13:23:51.244261 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef88e8ff83d2befbc440afcd6867f1b3-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-yypfq.gb1.brightbox.com\" (UID: \"ef88e8ff83d2befbc440afcd6867f1b3\") " pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.244455 kubelet[2936]: I0115 13:23:51.244306 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c004ba3d8a0f3348d9c529778084296-kubeconfig\") pod \"kube-scheduler-srv-yypfq.gb1.brightbox.com\" (UID: \"7c004ba3d8a0f3348d9c529778084296\") " pod="kube-system/kube-scheduler-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:51.794547 kubelet[2936]: I0115 13:23:51.793934 2936 apiserver.go:52] "Watching apiserver" Jan 15 13:23:51.838944 kubelet[2936]: I0115 13:23:51.837828 2936 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 13:23:51.972662 kubelet[2936]: W0115 13:23:51.971986 2936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 13:23:51.972662 kubelet[2936]: E0115 13:23:51.972096 2936 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-yypfq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" Jan 15 13:23:52.007591 kubelet[2936]: I0115 13:23:52.006565 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-yypfq.gb1.brightbox.com" podStartSLOduration=4.006477853 podStartE2EDuration="4.006477853s" podCreationTimestamp="2025-01-15 13:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:23:52.005594598 +0000 UTC m=+1.461944292" watchObservedRunningTime="2025-01-15 13:23:52.006477853 +0000 UTC m=+1.462827561" Jan 15 13:23:52.048153 kubelet[2936]: I0115 13:23:52.047929 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-yypfq.gb1.brightbox.com" podStartSLOduration=1.045980721 podStartE2EDuration="1.045980721s" podCreationTimestamp="2025-01-15 13:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:23:52.022831342 +0000 UTC m=+1.479181034" watchObservedRunningTime="2025-01-15 13:23:52.045980721 +0000 UTC m=+1.502330408" Jan 15 13:23:52.100541 kubelet[2936]: I0115 13:23:52.100482 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-yypfq.gb1.brightbox.com" podStartSLOduration=1.100408874 podStartE2EDuration="1.100408874s" podCreationTimestamp="2025-01-15 13:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:23:52.048814682 +0000 UTC m=+1.505164366" watchObservedRunningTime="2025-01-15 13:23:52.100408874 +0000 UTC m=+1.556758560" Jan 15 13:23:56.485473 sudo[1944]: pam_unix(sudo:session): session closed for user root Jan 15 13:23:56.632388 sshd[1940]: pam_unix(sshd:session): session closed for user core Jan 15 13:23:56.641104 systemd[1]: sshd@9-10.230.58.186:22-147.75.109.163:55394.service: Deactivated successfully. Jan 15 13:23:56.645531 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Jan 15 13:23:56.646990 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 15 13:23:56.649827 systemd-logind[1604]: Removed session 11. Jan 15 13:24:03.728952 kubelet[2936]: I0115 13:24:03.728876 2936 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 13:24:03.730349 kubelet[2936]: I0115 13:24:03.729940 2936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 13:24:03.730419 containerd[1622]: time="2025-01-15T13:24:03.729636549Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 13:24:04.476192 kubelet[2936]: I0115 13:24:04.476108 2936 topology_manager.go:215] "Topology Admit Handler" podUID="b7f8d167-504e-4390-86b2-a812a982187e" podNamespace="kube-system" podName="kube-proxy-nqcc6" Jan 15 13:24:04.528636 kubelet[2936]: I0115 13:24:04.528565 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjkj\" (UniqueName: \"kubernetes.io/projected/b7f8d167-504e-4390-86b2-a812a982187e-kube-api-access-lgjkj\") pod \"kube-proxy-nqcc6\" (UID: \"b7f8d167-504e-4390-86b2-a812a982187e\") " pod="kube-system/kube-proxy-nqcc6" Jan 15 13:24:04.528636 kubelet[2936]: I0115 13:24:04.528651 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7f8d167-504e-4390-86b2-a812a982187e-xtables-lock\") pod \"kube-proxy-nqcc6\" (UID: \"b7f8d167-504e-4390-86b2-a812a982187e\") " pod="kube-system/kube-proxy-nqcc6" Jan 15 13:24:04.528907 kubelet[2936]: I0115 13:24:04.528689 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7f8d167-504e-4390-86b2-a812a982187e-lib-modules\") pod \"kube-proxy-nqcc6\" (UID: \"b7f8d167-504e-4390-86b2-a812a982187e\") " pod="kube-system/kube-proxy-nqcc6" Jan 15 13:24:04.528907 kubelet[2936]: I0115 13:24:04.528724 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7f8d167-504e-4390-86b2-a812a982187e-kube-proxy\") pod \"kube-proxy-nqcc6\" (UID: \"b7f8d167-504e-4390-86b2-a812a982187e\") " pod="kube-system/kube-proxy-nqcc6" Jan 15 13:24:04.798716 containerd[1622]: time="2025-01-15T13:24:04.797175664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nqcc6,Uid:b7f8d167-504e-4390-86b2-a812a982187e,Namespace:kube-system,Attempt:0,}" Jan 15 13:24:04.835130 kubelet[2936]: I0115 13:24:04.834790 2936 topology_manager.go:215] "Topology Admit Handler" podUID="f73dc51b-5b31-4f32-b347-f69239da6d56" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-728m6" Jan 15 13:24:04.913186 containerd[1622]: time="2025-01-15T13:24:04.910014527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:04.913186 containerd[1622]: time="2025-01-15T13:24:04.910125207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:04.913186 containerd[1622]: time="2025-01-15T13:24:04.910149808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:04.913685 containerd[1622]: time="2025-01-15T13:24:04.913194912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:04.933216 kubelet[2936]: I0115 13:24:04.933169 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f73dc51b-5b31-4f32-b347-f69239da6d56-var-lib-calico\") pod \"tigera-operator-c7ccbd65-728m6\" (UID: \"f73dc51b-5b31-4f32-b347-f69239da6d56\") " pod="tigera-operator/tigera-operator-c7ccbd65-728m6" Jan 15 13:24:04.933336 kubelet[2936]: I0115 13:24:04.933244 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582lq\" (UniqueName: \"kubernetes.io/projected/f73dc51b-5b31-4f32-b347-f69239da6d56-kube-api-access-582lq\") pod \"tigera-operator-c7ccbd65-728m6\" (UID: \"f73dc51b-5b31-4f32-b347-f69239da6d56\") " pod="tigera-operator/tigera-operator-c7ccbd65-728m6" Jan 15 13:24:04.981528 containerd[1622]: time="2025-01-15T13:24:04.981462597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nqcc6,Uid:b7f8d167-504e-4390-86b2-a812a982187e,Namespace:kube-system,Attempt:0,} returns sandbox id \"025e8d24451a603c42c52e189d4a8cdc0e443cfedd9bf70513fcc31961a13946\"" Jan 15 13:24:04.986658 containerd[1622]: time="2025-01-15T13:24:04.986350271Z" level=info msg="CreateContainer within sandbox \"025e8d24451a603c42c52e189d4a8cdc0e443cfedd9bf70513fcc31961a13946\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 13:24:05.011320 containerd[1622]: time="2025-01-15T13:24:05.011252660Z" level=info msg="CreateContainer within sandbox \"025e8d24451a603c42c52e189d4a8cdc0e443cfedd9bf70513fcc31961a13946\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0ce089370b29592954ef2a11ce1ce7b6dafa9097a933af751fe601f98e38e0f\"" Jan 15 13:24:05.013021 containerd[1622]: time="2025-01-15T13:24:05.012984111Z" level=info msg="StartContainer for \"a0ce089370b29592954ef2a11ce1ce7b6dafa9097a933af751fe601f98e38e0f\"" Jan 15 13:24:05.100854 containerd[1622]: time="2025-01-15T13:24:05.100194383Z" level=info msg="StartContainer for \"a0ce089370b29592954ef2a11ce1ce7b6dafa9097a933af751fe601f98e38e0f\" returns successfully" Jan 15 13:24:05.164293 containerd[1622]: time="2025-01-15T13:24:05.164233202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-728m6,Uid:f73dc51b-5b31-4f32-b347-f69239da6d56,Namespace:tigera-operator,Attempt:0,}" Jan 15 13:24:05.221498 containerd[1622]: time="2025-01-15T13:24:05.220531892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:05.221498 containerd[1622]: time="2025-01-15T13:24:05.220621548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:05.221498 containerd[1622]: time="2025-01-15T13:24:05.220709988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:05.222217 containerd[1622]: time="2025-01-15T13:24:05.221412461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:05.318824 containerd[1622]: time="2025-01-15T13:24:05.318531986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-728m6,Uid:f73dc51b-5b31-4f32-b347-f69239da6d56,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3709cb52df414a992c884d374fde22784046c3263aec6f48a79ce571a4bbbf10\"" Jan 15 13:24:05.344005 containerd[1622]: time="2025-01-15T13:24:05.343958075Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 13:24:05.653129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130790823.mount: Deactivated successfully. Jan 15 13:24:10.329837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121723997.mount: Deactivated successfully. Jan 15 13:24:11.700827 containerd[1622]: time="2025-01-15T13:24:11.700675010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:11.702113 containerd[1622]: time="2025-01-15T13:24:11.701791540Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763741" Jan 15 13:24:11.727910 containerd[1622]: time="2025-01-15T13:24:11.727834036Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:11.731933 containerd[1622]: time="2025-01-15T13:24:11.731602933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:11.733456 containerd[1622]: time="2025-01-15T13:24:11.733010958Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.388995983s" Jan 15 13:24:11.733456 containerd[1622]: time="2025-01-15T13:24:11.733068378Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 15 13:24:11.746977 containerd[1622]: time="2025-01-15T13:24:11.746660476Z" level=info msg="CreateContainer within sandbox \"3709cb52df414a992c884d374fde22784046c3263aec6f48a79ce571a4bbbf10\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 13:24:11.775585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658709187.mount: Deactivated successfully. Jan 15 13:24:11.777272 containerd[1622]: time="2025-01-15T13:24:11.777221945Z" level=info msg="CreateContainer within sandbox \"3709cb52df414a992c884d374fde22784046c3263aec6f48a79ce571a4bbbf10\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0702bc4a7fcc5dddaa40c8c274fdb5c95e8e34aab025cf3ea7dc7a16d6d5913\"" Jan 15 13:24:11.778205 containerd[1622]: time="2025-01-15T13:24:11.778173126Z" level=info msg="StartContainer for \"a0702bc4a7fcc5dddaa40c8c274fdb5c95e8e34aab025cf3ea7dc7a16d6d5913\"" Jan 15 13:24:11.826760 systemd[1]: run-containerd-runc-k8s.io-a0702bc4a7fcc5dddaa40c8c274fdb5c95e8e34aab025cf3ea7dc7a16d6d5913-runc.ye1UB3.mount: Deactivated successfully. 
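The tigera-operator image pull above completed in roughly 6.4 s and resolved to image id sha256:3045aa4a…, after which the container was created and started inside sandbox 3709cb52…. A hedged way to confirm the image and container from the node, assuming crictl is pointed at the containerd socket (its usual default):

    crictl images | grep tigera/operator    # should list quay.io/tigera/operator v1.36.2
    crictl ps --name tigera-operator        # shows the running operator container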
Jan 15 13:24:11.872690 containerd[1622]: time="2025-01-15T13:24:11.872640199Z" level=info msg="StartContainer for \"a0702bc4a7fcc5dddaa40c8c274fdb5c95e8e34aab025cf3ea7dc7a16d6d5913\" returns successfully" Jan 15 13:24:12.017395 kubelet[2936]: I0115 13:24:12.015272 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nqcc6" podStartSLOduration=8.015184005 podStartE2EDuration="8.015184005s" podCreationTimestamp="2025-01-15 13:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:24:05.99825797 +0000 UTC m=+15.454607658" watchObservedRunningTime="2025-01-15 13:24:12.015184005 +0000 UTC m=+21.471533686" Jan 15 13:24:15.240953 kubelet[2936]: I0115 13:24:15.237315 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-728m6" podStartSLOduration=4.825491191 podStartE2EDuration="11.237152995s" podCreationTimestamp="2025-01-15 13:24:04 +0000 UTC" firstStartedPulling="2025-01-15 13:24:05.322090397 +0000 UTC m=+14.778440076" lastFinishedPulling="2025-01-15 13:24:11.733752202 +0000 UTC m=+21.190101880" observedRunningTime="2025-01-15 13:24:12.01794526 +0000 UTC m=+21.474294952" watchObservedRunningTime="2025-01-15 13:24:15.237152995 +0000 UTC m=+24.693502703" Jan 15 13:24:15.247666 kubelet[2936]: I0115 13:24:15.245403 2936 topology_manager.go:215] "Topology Admit Handler" podUID="aa52a59f-cfba-4fae-844f-496cd0df0177" podNamespace="calico-system" podName="calico-typha-547c9d5b5f-kqjkw" Jan 15 13:24:15.398106 kubelet[2936]: I0115 13:24:15.397527 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa52a59f-cfba-4fae-844f-496cd0df0177-tigera-ca-bundle\") pod \"calico-typha-547c9d5b5f-kqjkw\" (UID: \"aa52a59f-cfba-4fae-844f-496cd0df0177\") " pod="calico-system/calico-typha-547c9d5b5f-kqjkw" Jan 15 13:24:15.398106 kubelet[2936]: I0115 13:24:15.397620 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa52a59f-cfba-4fae-844f-496cd0df0177-typha-certs\") pod \"calico-typha-547c9d5b5f-kqjkw\" (UID: \"aa52a59f-cfba-4fae-844f-496cd0df0177\") " pod="calico-system/calico-typha-547c9d5b5f-kqjkw" Jan 15 13:24:15.398106 kubelet[2936]: I0115 13:24:15.397813 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2252t\" (UniqueName: \"kubernetes.io/projected/aa52a59f-cfba-4fae-844f-496cd0df0177-kube-api-access-2252t\") pod \"calico-typha-547c9d5b5f-kqjkw\" (UID: \"aa52a59f-cfba-4fae-844f-496cd0df0177\") " pod="calico-system/calico-typha-547c9d5b5f-kqjkw" Jan 15 13:24:15.444285 kubelet[2936]: I0115 13:24:15.444154 2936 topology_manager.go:215] "Topology Admit Handler" podUID="637dfc55-7b18-47e0-b1c2-df8395964ed6" podNamespace="calico-system" podName="calico-node-cgc7n" Jan 15 13:24:15.567439 containerd[1622]: time="2025-01-15T13:24:15.566577140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547c9d5b5f-kqjkw,Uid:aa52a59f-cfba-4fae-844f-496cd0df0177,Namespace:calico-system,Attempt:0,}" Jan 15 13:24:15.579524 kubelet[2936]: I0115 13:24:15.577203 2936 topology_manager.go:215] "Topology Admit Handler" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" podNamespace="calico-system" podName="csi-node-driver-8z9xn" Jan 15 
13:24:15.582101 kubelet[2936]: E0115 13:24:15.582046 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:15.604226 kubelet[2936]: I0115 13:24:15.600581 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-var-lib-calico\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.604226 kubelet[2936]: I0115 13:24:15.600655 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rgw\" (UniqueName: \"kubernetes.io/projected/637dfc55-7b18-47e0-b1c2-df8395964ed6-kube-api-access-j8rgw\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.604226 kubelet[2936]: I0115 13:24:15.600696 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-cni-net-dir\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.604226 kubelet[2936]: I0115 13:24:15.600730 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-cni-bin-dir\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.604226 kubelet[2936]: I0115 13:24:15.600795 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/637dfc55-7b18-47e0-b1c2-df8395964ed6-node-certs\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.610432 kubelet[2936]: I0115 13:24:15.600876 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637dfc55-7b18-47e0-b1c2-df8395964ed6-tigera-ca-bundle\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.610432 kubelet[2936]: I0115 13:24:15.609930 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-var-run-calico\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.610432 kubelet[2936]: I0115 13:24:15.610102 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-cni-log-dir\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.611127 kubelet[2936]: I0115 13:24:15.610760 2936 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-policysync\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.611241 kubelet[2936]: I0115 13:24:15.611104 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-flexvol-driver-host\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.612085 kubelet[2936]: I0115 13:24:15.612004 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-lib-modules\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.612085 kubelet[2936]: I0115 13:24:15.612055 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637dfc55-7b18-47e0-b1c2-df8395964ed6-xtables-lock\") pod \"calico-node-cgc7n\" (UID: \"637dfc55-7b18-47e0-b1c2-df8395964ed6\") " pod="calico-system/calico-node-cgc7n" Jan 15 13:24:15.713813 kubelet[2936]: I0115 13:24:15.713193 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bab30654-a49a-4455-9e85-a89c233ead6f-kubelet-dir\") pod \"csi-node-driver-8z9xn\" (UID: \"bab30654-a49a-4455-9e85-a89c233ead6f\") " pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:15.713813 kubelet[2936]: I0115 13:24:15.713337 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6hd\" (UniqueName: \"kubernetes.io/projected/bab30654-a49a-4455-9e85-a89c233ead6f-kube-api-access-xm6hd\") pod \"csi-node-driver-8z9xn\" (UID: \"bab30654-a49a-4455-9e85-a89c233ead6f\") " pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:15.713813 kubelet[2936]: I0115 13:24:15.713394 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bab30654-a49a-4455-9e85-a89c233ead6f-socket-dir\") pod \"csi-node-driver-8z9xn\" (UID: \"bab30654-a49a-4455-9e85-a89c233ead6f\") " pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:15.713813 kubelet[2936]: I0115 13:24:15.713510 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bab30654-a49a-4455-9e85-a89c233ead6f-varrun\") pod \"csi-node-driver-8z9xn\" (UID: \"bab30654-a49a-4455-9e85-a89c233ead6f\") " pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:15.713813 kubelet[2936]: I0115 13:24:15.713549 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bab30654-a49a-4455-9e85-a89c233ead6f-registration-dir\") pod \"csi-node-driver-8z9xn\" (UID: \"bab30654-a49a-4455-9e85-a89c233ead6f\") " pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:15.739361 kubelet[2936]: E0115 13:24:15.738990 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", 
error: unexpected end of JSON input Jan 15 13:24:15.739361 kubelet[2936]: W0115 13:24:15.739087 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.741361 kubelet[2936]: E0115 13:24:15.739704 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.742111 containerd[1622]: time="2025-01-15T13:24:15.737377664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:15.742111 containerd[1622]: time="2025-01-15T13:24:15.737590746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:15.742111 containerd[1622]: time="2025-01-15T13:24:15.737627719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:15.742111 containerd[1622]: time="2025-01-15T13:24:15.737810489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:15.752307 kubelet[2936]: E0115 13:24:15.752277 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.752636 kubelet[2936]: W0115 13:24:15.752613 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.753935 kubelet[2936]: E0115 13:24:15.753784 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.758824 kubelet[2936]: E0115 13:24:15.758785 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.758993 kubelet[2936]: W0115 13:24:15.758970 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.759119 kubelet[2936]: E0115 13:24:15.759101 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 13:24:15.765288 containerd[1622]: time="2025-01-15T13:24:15.765058103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cgc7n,Uid:637dfc55-7b18-47e0-b1c2-df8395964ed6,Namespace:calico-system,Attempt:0,}" Jan 15 13:24:15.816600 kubelet[2936]: E0115 13:24:15.815240 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.816600 kubelet[2936]: W0115 13:24:15.815974 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.816600 kubelet[2936]: E0115 13:24:15.816031 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.816600 kubelet[2936]: E0115 13:24:15.816460 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.816600 kubelet[2936]: W0115 13:24:15.816474 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.816600 kubelet[2936]: E0115 13:24:15.816522 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.821001 kubelet[2936]: E0115 13:24:15.820470 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.821001 kubelet[2936]: W0115 13:24:15.820684 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.821001 kubelet[2936]: E0115 13:24:15.820714 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.823714 kubelet[2936]: E0115 13:24:15.822934 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.823714 kubelet[2936]: W0115 13:24:15.822955 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.823714 kubelet[2936]: E0115 13:24:15.822997 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 13:24:15.825023 kubelet[2936]: E0115 13:24:15.824421 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.825023 kubelet[2936]: W0115 13:24:15.824441 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.825023 kubelet[2936]: E0115 13:24:15.824495 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.826589 kubelet[2936]: E0115 13:24:15.826267 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.826589 kubelet[2936]: W0115 13:24:15.826288 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.826589 kubelet[2936]: E0115 13:24:15.826309 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.827115 kubelet[2936]: E0115 13:24:15.826835 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.827115 kubelet[2936]: W0115 13:24:15.827039 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.827115 kubelet[2936]: E0115 13:24:15.827061 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.827758 kubelet[2936]: E0115 13:24:15.827601 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.827758 kubelet[2936]: W0115 13:24:15.827620 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.827758 kubelet[2936]: E0115 13:24:15.827661 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.831306 kubelet[2936]: E0115 13:24:15.831274 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.831493 kubelet[2936]: W0115 13:24:15.831470 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.831704 kubelet[2936]: E0115 13:24:15.831672 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 13:24:15.832171 kubelet[2936]: E0115 13:24:15.832129 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.832290 kubelet[2936]: W0115 13:24:15.832272 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.832473 kubelet[2936]: E0115 13:24:15.832410 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.834148 kubelet[2936]: E0115 13:24:15.834127 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.834999 kubelet[2936]: W0115 13:24:15.834630 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.834999 kubelet[2936]: E0115 13:24:15.834663 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.835840 kubelet[2936]: E0115 13:24:15.835695 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.835840 kubelet[2936]: W0115 13:24:15.835715 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.836355 kubelet[2936]: E0115 13:24:15.836220 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.836355 kubelet[2936]: W0115 13:24:15.836239 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.836952 kubelet[2936]: E0115 13:24:15.836842 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.836952 kubelet[2936]: W0115 13:24:15.836876 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.837560 kubelet[2936]: E0115 13:24:15.837386 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.837560 kubelet[2936]: W0115 13:24:15.837403 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.838223 kubelet[2936]: E0115 13:24:15.838021 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.838223 kubelet[2936]: W0115 13:24:15.838149 2936 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.838223 kubelet[2936]: E0115 13:24:15.838177 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.838595 kubelet[2936]: E0115 13:24:15.838451 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.838595 kubelet[2936]: E0115 13:24:15.838487 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.839382 kubelet[2936]: E0115 13:24:15.839345 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.839564 kubelet[2936]: W0115 13:24:15.839363 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.839564 kubelet[2936]: E0115 13:24:15.839502 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.840117 kubelet[2936]: E0115 13:24:15.839996 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.840117 kubelet[2936]: E0115 13:24:15.840067 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.842466 kubelet[2936]: E0115 13:24:15.840019 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.842466 kubelet[2936]: W0115 13:24:15.842230 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.842466 kubelet[2936]: E0115 13:24:15.842265 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.843383 kubelet[2936]: E0115 13:24:15.843364 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.843532 kubelet[2936]: W0115 13:24:15.843512 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.843708 kubelet[2936]: E0115 13:24:15.843678 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 13:24:15.844271 kubelet[2936]: E0115 13:24:15.844099 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.844271 kubelet[2936]: W0115 13:24:15.844117 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.844436 kubelet[2936]: E0115 13:24:15.844416 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.844714 kubelet[2936]: E0115 13:24:15.844614 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.844714 kubelet[2936]: W0115 13:24:15.844630 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.844714 kubelet[2936]: E0115 13:24:15.844658 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.845476 kubelet[2936]: E0115 13:24:15.845319 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.845476 kubelet[2936]: W0115 13:24:15.845336 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.845476 kubelet[2936]: E0115 13:24:15.845375 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.845920 kubelet[2936]: E0115 13:24:15.845892 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.846474 kubelet[2936]: W0115 13:24:15.846018 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.846474 kubelet[2936]: E0115 13:24:15.846047 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.855346 kubelet[2936]: E0115 13:24:15.855308 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.855346 kubelet[2936]: W0115 13:24:15.855340 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.855532 kubelet[2936]: E0115 13:24:15.855371 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 13:24:15.857478 kubelet[2936]: E0115 13:24:15.857322 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.857478 kubelet[2936]: W0115 13:24:15.857347 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.857478 kubelet[2936]: E0115 13:24:15.857368 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.867140 kubelet[2936]: E0115 13:24:15.867113 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 13:24:15.867694 kubelet[2936]: W0115 13:24:15.867363 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 13:24:15.867694 kubelet[2936]: E0115 13:24:15.867410 2936 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 13:24:15.894068 containerd[1622]: time="2025-01-15T13:24:15.892371236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:15.894068 containerd[1622]: time="2025-01-15T13:24:15.892544319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:15.894068 containerd[1622]: time="2025-01-15T13:24:15.892666205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:15.901840 containerd[1622]: time="2025-01-15T13:24:15.899996658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:16.037914 containerd[1622]: time="2025-01-15T13:24:16.037590494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cgc7n,Uid:637dfc55-7b18-47e0-b1c2-df8395964ed6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\"" Jan 15 13:24:16.046305 containerd[1622]: time="2025-01-15T13:24:16.044803303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 15 13:24:16.058542 containerd[1622]: time="2025-01-15T13:24:16.058245208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547c9d5b5f-kqjkw,Uid:aa52a59f-cfba-4fae-844f-496cd0df0177,Namespace:calico-system,Attempt:0,} returns sandbox id \"263678b8303314e06d0ff937e8e6aebe1d130a6c3063bef7472cd79ba35d3cc5\"" Jan 15 13:24:17.730396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968008195.mount: Deactivated successfully. 
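The block of FlexVolume probe failures above is the kubelet rescanning its volume plugin directory and finding a nodeagent~uds entry whose uds executable has not been installed yet; the pod2daemon-flexvol image pulled for calico-node is what normally drops that binary in place via the flexvol-driver container started just below. A quick check of the directory the kubelet is probing (path copied from the errors; its emptiness before that container runs is an assumption):

    ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/
    # until calico-node's flexvol-driver container has run, the uds executable is
    # absent here, which is exactly what driver-call.go keeps reporting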
Jan 15 13:24:17.878572 containerd[1622]: time="2025-01-15T13:24:17.878233918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:17.880054 containerd[1622]: time="2025-01-15T13:24:17.879982462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 15 13:24:17.880841 containerd[1622]: time="2025-01-15T13:24:17.880769019Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:17.883831 containerd[1622]: time="2025-01-15T13:24:17.883795856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:17.885827 containerd[1622]: time="2025-01-15T13:24:17.885173445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.839158374s" Jan 15 13:24:17.885827 containerd[1622]: time="2025-01-15T13:24:17.885222003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 15 13:24:17.885958 kubelet[2936]: E0115 13:24:17.885339 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:17.888097 containerd[1622]: time="2025-01-15T13:24:17.887939540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 15 13:24:17.889283 containerd[1622]: time="2025-01-15T13:24:17.889235165Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 13:24:17.911485 containerd[1622]: time="2025-01-15T13:24:17.911410201Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2\"" Jan 15 13:24:17.913413 containerd[1622]: time="2025-01-15T13:24:17.912559491Z" level=info msg="StartContainer for \"b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2\"" Jan 15 13:24:18.167307 containerd[1622]: time="2025-01-15T13:24:18.166855301Z" level=info msg="StartContainer for \"b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2\" returns successfully" Jan 15 13:24:18.239738 containerd[1622]: time="2025-01-15T13:24:18.221756839Z" level=info msg="shim disconnected" id=b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2 namespace=k8s.io Jan 15 13:24:18.240140 containerd[1622]: time="2025-01-15T13:24:18.240074192Z" level=warning msg="cleaning 
up after shim disconnected" id=b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2 namespace=k8s.io Jan 15 13:24:18.240382 containerd[1622]: time="2025-01-15T13:24:18.240246932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 13:24:18.674789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b26940743befb9d285d429ce9d035e13e7da8b8ec3998515fa0625a763d9c0e2-rootfs.mount: Deactivated successfully. Jan 15 13:24:19.883777 kubelet[2936]: E0115 13:24:19.883302 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:21.240296 containerd[1622]: time="2025-01-15T13:24:21.240226274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:21.242708 containerd[1622]: time="2025-01-15T13:24:21.241847014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 15 13:24:21.244274 containerd[1622]: time="2025-01-15T13:24:21.244217516Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:21.250068 containerd[1622]: time="2025-01-15T13:24:21.250017303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:21.253573 containerd[1622]: time="2025-01-15T13:24:21.253539123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.365554546s" Jan 15 13:24:21.253793 containerd[1622]: time="2025-01-15T13:24:21.253759085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 15 13:24:21.256910 containerd[1622]: time="2025-01-15T13:24:21.255948012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 15 13:24:21.289502 containerd[1622]: time="2025-01-15T13:24:21.289427652Z" level=info msg="CreateContainer within sandbox \"263678b8303314e06d0ff937e8e6aebe1d130a6c3063bef7472cd79ba35d3cc5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 13:24:21.329173 containerd[1622]: time="2025-01-15T13:24:21.329111664Z" level=info msg="CreateContainer within sandbox \"263678b8303314e06d0ff937e8e6aebe1d130a6c3063bef7472cd79ba35d3cc5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cabe375661d3e2629b153f88c04e1b86b34f5f71dac5e52ef2325320f0cf2fb5\"" Jan 15 13:24:21.331902 containerd[1622]: time="2025-01-15T13:24:21.330342703Z" level=info msg="StartContainer for \"cabe375661d3e2629b153f88c04e1b86b34f5f71dac5e52ef2325320f0cf2fb5\"" Jan 15 13:24:21.472114 containerd[1622]: time="2025-01-15T13:24:21.472061430Z" level=info msg="StartContainer for 
\"cabe375661d3e2629b153f88c04e1b86b34f5f71dac5e52ef2325320f0cf2fb5\" returns successfully" Jan 15 13:24:21.883807 kubelet[2936]: E0115 13:24:21.883729 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:22.049946 kubelet[2936]: I0115 13:24:22.048585 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-547c9d5b5f-kqjkw" podStartSLOduration=1.856776499 podStartE2EDuration="7.048534935s" podCreationTimestamp="2025-01-15 13:24:15 +0000 UTC" firstStartedPulling="2025-01-15 13:24:16.06269487 +0000 UTC m=+25.519044546" lastFinishedPulling="2025-01-15 13:24:21.254453291 +0000 UTC m=+30.710802982" observedRunningTime="2025-01-15 13:24:22.046632091 +0000 UTC m=+31.502981787" watchObservedRunningTime="2025-01-15 13:24:22.048534935 +0000 UTC m=+31.504884629" Jan 15 13:24:23.042871 kubelet[2936]: I0115 13:24:23.042810 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:24:23.884066 kubelet[2936]: E0115 13:24:23.883996 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:24.093968 kubelet[2936]: I0115 13:24:24.092236 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:24:25.884685 kubelet[2936]: E0115 13:24:25.884623 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:27.560605 containerd[1622]: time="2025-01-15T13:24:27.560549907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:27.562194 containerd[1622]: time="2025-01-15T13:24:27.562151387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 15 13:24:27.563235 containerd[1622]: time="2025-01-15T13:24:27.563174787Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:27.566702 containerd[1622]: time="2025-01-15T13:24:27.566299236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:27.568304 containerd[1622]: time="2025-01-15T13:24:27.568272342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.312206014s" Jan 15 13:24:27.568518 containerd[1622]: 
time="2025-01-15T13:24:27.568457184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 15 13:24:27.573558 containerd[1622]: time="2025-01-15T13:24:27.573344672Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 13:24:27.614050 containerd[1622]: time="2025-01-15T13:24:27.613858311Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed\"" Jan 15 13:24:27.617253 containerd[1622]: time="2025-01-15T13:24:27.616188609Z" level=info msg="StartContainer for \"e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed\"" Jan 15 13:24:27.687191 systemd[1]: run-containerd-runc-k8s.io-e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed-runc.HOXhUh.mount: Deactivated successfully. Jan 15 13:24:27.745418 containerd[1622]: time="2025-01-15T13:24:27.745366104Z" level=info msg="StartContainer for \"e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed\" returns successfully" Jan 15 13:24:27.884078 kubelet[2936]: E0115 13:24:27.883919 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:28.818774 kubelet[2936]: I0115 13:24:28.816570 2936 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 15 13:24:28.854992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed-rootfs.mount: Deactivated successfully. 
Jan 15 13:24:28.857377 containerd[1622]: time="2025-01-15T13:24:28.857273452Z" level=info msg="shim disconnected" id=e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed namespace=k8s.io Jan 15 13:24:28.857377 containerd[1622]: time="2025-01-15T13:24:28.857369293Z" level=warning msg="cleaning up after shim disconnected" id=e480e851e63882ed85dd3ff3e8fda79f9ee31070e2ceee61b9bb0f39675018ed namespace=k8s.io Jan 15 13:24:28.858269 containerd[1622]: time="2025-01-15T13:24:28.857386244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 13:24:28.872503 kubelet[2936]: I0115 13:24:28.865081 2936 topology_manager.go:215] "Topology Admit Handler" podUID="b1347138-b178-46fe-b1c2-417fdd2c6573" podNamespace="kube-system" podName="coredns-76f75df574-n82fg" Jan 15 13:24:28.874666 kubelet[2936]: I0115 13:24:28.873607 2936 topology_manager.go:215] "Topology Admit Handler" podUID="5e38af2f-9a3e-4164-8caa-1169f5058aa2" podNamespace="kube-system" podName="coredns-76f75df574-v2jgb" Jan 15 13:24:28.875489 kubelet[2936]: I0115 13:24:28.874965 2936 topology_manager.go:215] "Topology Admit Handler" podUID="396b8a00-1f57-47d3-b1f7-4b055b17ec02" podNamespace="calico-system" podName="calico-kube-controllers-5bd67cfc8-vd74p" Jan 15 13:24:28.907730 kubelet[2936]: I0115 13:24:28.904715 2936 topology_manager.go:215] "Topology Admit Handler" podUID="743a8cfe-c26f-44f9-9a91-9c52598564bc" podNamespace="calico-apiserver" podName="calico-apiserver-684c6c87d-dqvv4" Jan 15 13:24:28.907730 kubelet[2936]: I0115 13:24:28.905003 2936 topology_manager.go:215] "Topology Admit Handler" podUID="2b926c45-7bff-4737-b734-8ce12b4b6c1f" podNamespace="calico-apiserver" podName="calico-apiserver-684c6c87d-x4jkn" Jan 15 13:24:29.053050 kubelet[2936]: I0115 13:24:29.052980 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1347138-b178-46fe-b1c2-417fdd2c6573-config-volume\") pod \"coredns-76f75df574-n82fg\" (UID: \"b1347138-b178-46fe-b1c2-417fdd2c6573\") " pod="kube-system/coredns-76f75df574-n82fg" Jan 15 13:24:29.053275 kubelet[2936]: I0115 13:24:29.053066 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gqd\" (UniqueName: \"kubernetes.io/projected/b1347138-b178-46fe-b1c2-417fdd2c6573-kube-api-access-s5gqd\") pod \"coredns-76f75df574-n82fg\" (UID: \"b1347138-b178-46fe-b1c2-417fdd2c6573\") " pod="kube-system/coredns-76f75df574-n82fg" Jan 15 13:24:29.053275 kubelet[2936]: I0115 13:24:29.053122 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5gk\" (UniqueName: \"kubernetes.io/projected/396b8a00-1f57-47d3-b1f7-4b055b17ec02-kube-api-access-2f5gk\") pod \"calico-kube-controllers-5bd67cfc8-vd74p\" (UID: \"396b8a00-1f57-47d3-b1f7-4b055b17ec02\") " pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" Jan 15 13:24:29.053275 kubelet[2936]: I0115 13:24:29.053175 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e38af2f-9a3e-4164-8caa-1169f5058aa2-config-volume\") pod \"coredns-76f75df574-v2jgb\" (UID: \"5e38af2f-9a3e-4164-8caa-1169f5058aa2\") " pod="kube-system/coredns-76f75df574-v2jgb" Jan 15 13:24:29.053275 kubelet[2936]: I0115 13:24:29.053208 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/396b8a00-1f57-47d3-b1f7-4b055b17ec02-tigera-ca-bundle\") pod \"calico-kube-controllers-5bd67cfc8-vd74p\" (UID: \"396b8a00-1f57-47d3-b1f7-4b055b17ec02\") " pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" Jan 15 13:24:29.053275 kubelet[2936]: I0115 13:24:29.053256 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kplj\" (UniqueName: \"kubernetes.io/projected/5e38af2f-9a3e-4164-8caa-1169f5058aa2-kube-api-access-6kplj\") pod \"coredns-76f75df574-v2jgb\" (UID: \"5e38af2f-9a3e-4164-8caa-1169f5058aa2\") " pod="kube-system/coredns-76f75df574-v2jgb" Jan 15 13:24:29.053508 kubelet[2936]: I0115 13:24:29.053290 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/743a8cfe-c26f-44f9-9a91-9c52598564bc-calico-apiserver-certs\") pod \"calico-apiserver-684c6c87d-dqvv4\" (UID: \"743a8cfe-c26f-44f9-9a91-9c52598564bc\") " pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" Jan 15 13:24:29.053508 kubelet[2936]: I0115 13:24:29.053324 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l865p\" (UniqueName: \"kubernetes.io/projected/743a8cfe-c26f-44f9-9a91-9c52598564bc-kube-api-access-l865p\") pod \"calico-apiserver-684c6c87d-dqvv4\" (UID: \"743a8cfe-c26f-44f9-9a91-9c52598564bc\") " pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" Jan 15 13:24:29.053508 kubelet[2936]: I0115 13:24:29.053356 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rbv\" (UniqueName: \"kubernetes.io/projected/2b926c45-7bff-4737-b734-8ce12b4b6c1f-kube-api-access-m5rbv\") pod \"calico-apiserver-684c6c87d-x4jkn\" (UID: \"2b926c45-7bff-4737-b734-8ce12b4b6c1f\") " pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" Jan 15 13:24:29.053508 kubelet[2936]: I0115 13:24:29.053393 2936 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b926c45-7bff-4737-b734-8ce12b4b6c1f-calico-apiserver-certs\") pod \"calico-apiserver-684c6c87d-x4jkn\" (UID: \"2b926c45-7bff-4737-b734-8ce12b4b6c1f\") " pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" Jan 15 13:24:29.077682 containerd[1622]: time="2025-01-15T13:24:29.076059632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 13:24:29.228816 containerd[1622]: time="2025-01-15T13:24:29.228660490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-x4jkn,Uid:2b926c45-7bff-4737-b734-8ce12b4b6c1f,Namespace:calico-apiserver,Attempt:0,}" Jan 15 13:24:29.237954 containerd[1622]: time="2025-01-15T13:24:29.237539995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-dqvv4,Uid:743a8cfe-c26f-44f9-9a91-9c52598564bc,Namespace:calico-apiserver,Attempt:0,}" Jan 15 13:24:29.488932 containerd[1622]: time="2025-01-15T13:24:29.486401567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n82fg,Uid:b1347138-b178-46fe-b1c2-417fdd2c6573,Namespace:kube-system,Attempt:0,}" Jan 15 13:24:29.528746 containerd[1622]: time="2025-01-15T13:24:29.526991894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v2jgb,Uid:5e38af2f-9a3e-4164-8caa-1169f5058aa2,Namespace:kube-system,Attempt:0,}" Jan 15 13:24:29.528746 
containerd[1622]: time="2025-01-15T13:24:29.527366819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd67cfc8-vd74p,Uid:396b8a00-1f57-47d3-b1f7-4b055b17ec02,Namespace:calico-system,Attempt:0,}" Jan 15 13:24:29.528978 containerd[1622]: time="2025-01-15T13:24:29.528896721Z" level=error msg="Failed to destroy network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.533056 containerd[1622]: time="2025-01-15T13:24:29.532894303Z" level=error msg="encountered an error cleaning up failed sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.534642 containerd[1622]: time="2025-01-15T13:24:29.534594775Z" level=error msg="Failed to destroy network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.537488 containerd[1622]: time="2025-01-15T13:24:29.537442971Z" level=error msg="encountered an error cleaning up failed sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.541210 containerd[1622]: time="2025-01-15T13:24:29.540940881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-dqvv4,Uid:743a8cfe-c26f-44f9-9a91-9c52598564bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.551404 kubelet[2936]: E0115 13:24:29.551247 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.551404 kubelet[2936]: E0115 13:24:29.551341 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" Jan 15 13:24:29.551404 kubelet[2936]: E0115 13:24:29.551376 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" Jan 15 13:24:29.552320 kubelet[2936]: E0115 13:24:29.551468 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684c6c87d-dqvv4_calico-apiserver(743a8cfe-c26f-44f9-9a91-9c52598564bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-684c6c87d-dqvv4_calico-apiserver(743a8cfe-c26f-44f9-9a91-9c52598564bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" podUID="743a8cfe-c26f-44f9-9a91-9c52598564bc" Jan 15 13:24:29.552767 containerd[1622]: time="2025-01-15T13:24:29.552114486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-x4jkn,Uid:2b926c45-7bff-4737-b734-8ce12b4b6c1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.553323 kubelet[2936]: E0115 13:24:29.553088 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.553323 kubelet[2936]: E0115 13:24:29.553132 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" Jan 15 13:24:29.553323 kubelet[2936]: E0115 13:24:29.553161 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" Jan 15 13:24:29.554298 kubelet[2936]: E0115 13:24:29.553230 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684c6c87d-x4jkn_calico-apiserver(2b926c45-7bff-4737-b734-8ce12b4b6c1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-684c6c87d-x4jkn_calico-apiserver(2b926c45-7bff-4737-b734-8ce12b4b6c1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" podUID="2b926c45-7bff-4737-b734-8ce12b4b6c1f" Jan 15 13:24:29.679452 containerd[1622]: time="2025-01-15T13:24:29.679376534Z" level=error msg="Failed to destroy network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.680325 containerd[1622]: time="2025-01-15T13:24:29.680278228Z" level=error msg="encountered an error cleaning up failed sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.680652 containerd[1622]: time="2025-01-15T13:24:29.680571261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n82fg,Uid:b1347138-b178-46fe-b1c2-417fdd2c6573,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.681139 kubelet[2936]: E0115 13:24:29.681102 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.681227 kubelet[2936]: E0115 13:24:29.681182 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n82fg" Jan 15 13:24:29.681385 kubelet[2936]: E0115 13:24:29.681218 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n82fg" Jan 15 13:24:29.681385 kubelet[2936]: E0115 13:24:29.681325 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-n82fg_kube-system(b1347138-b178-46fe-b1c2-417fdd2c6573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-n82fg_kube-system(b1347138-b178-46fe-b1c2-417fdd2c6573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n82fg" podUID="b1347138-b178-46fe-b1c2-417fdd2c6573" Jan 15 13:24:29.715229 containerd[1622]: time="2025-01-15T13:24:29.715056649Z" level=error msg="Failed to destroy network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.715924 containerd[1622]: time="2025-01-15T13:24:29.715859946Z" level=error msg="encountered an error cleaning up failed sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.716131 containerd[1622]: time="2025-01-15T13:24:29.715956495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v2jgb,Uid:5e38af2f-9a3e-4164-8caa-1169f5058aa2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.716501 kubelet[2936]: E0115 13:24:29.716363 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.716501 kubelet[2936]: E0115 13:24:29.716473 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v2jgb" Jan 15 13:24:29.718839 kubelet[2936]: E0115 13:24:29.716510 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v2jgb" Jan 15 13:24:29.718839 kubelet[2936]: E0115 13:24:29.716584 2936 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v2jgb_kube-system(5e38af2f-9a3e-4164-8caa-1169f5058aa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v2jgb_kube-system(5e38af2f-9a3e-4164-8caa-1169f5058aa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v2jgb" podUID="5e38af2f-9a3e-4164-8caa-1169f5058aa2" Jan 15 13:24:29.719177 containerd[1622]: time="2025-01-15T13:24:29.719065734Z" level=error msg="Failed to destroy network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.719638 containerd[1622]: time="2025-01-15T13:24:29.719580531Z" level=error msg="encountered an error cleaning up failed sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.719755 containerd[1622]: time="2025-01-15T13:24:29.719680584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd67cfc8-vd74p,Uid:396b8a00-1f57-47d3-b1f7-4b055b17ec02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.720361 kubelet[2936]: E0115 13:24:29.719977 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.720361 kubelet[2936]: E0115 13:24:29.720023 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" Jan 15 13:24:29.720361 kubelet[2936]: E0115 13:24:29.720066 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" Jan 15 13:24:29.720586 kubelet[2936]: E0115 13:24:29.720135 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd67cfc8-vd74p_calico-system(396b8a00-1f57-47d3-b1f7-4b055b17ec02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd67cfc8-vd74p_calico-system(396b8a00-1f57-47d3-b1f7-4b055b17ec02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" podUID="396b8a00-1f57-47d3-b1f7-4b055b17ec02" Jan 15 13:24:29.893369 containerd[1622]: time="2025-01-15T13:24:29.893225814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z9xn,Uid:bab30654-a49a-4455-9e85-a89c233ead6f,Namespace:calico-system,Attempt:0,}" Jan 15 13:24:29.994644 containerd[1622]: time="2025-01-15T13:24:29.994478891Z" level=error msg="Failed to destroy network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.997491 containerd[1622]: time="2025-01-15T13:24:29.997410682Z" level=error msg="encountered an error cleaning up failed sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.997611 containerd[1622]: time="2025-01-15T13:24:29.997553779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z9xn,Uid:bab30654-a49a-4455-9e85-a89c233ead6f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:29.998062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672-shm.mount: Deactivated successfully. 
Jan 15 13:24:30.000180 kubelet[2936]: E0115 13:24:29.998974 2936 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.000180 kubelet[2936]: E0115 13:24:29.999122 2936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:30.000180 kubelet[2936]: E0115 13:24:29.999187 2936 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z9xn" Jan 15 13:24:30.001100 kubelet[2936]: E0115 13:24:29.999298 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8z9xn_calico-system(bab30654-a49a-4455-9e85-a89c233ead6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8z9xn_calico-system(bab30654-a49a-4455-9e85-a89c233ead6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:30.075059 kubelet[2936]: I0115 13:24:30.074984 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:30.079804 kubelet[2936]: I0115 13:24:30.079635 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:30.082506 kubelet[2936]: I0115 13:24:30.082435 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:30.083468 containerd[1622]: time="2025-01-15T13:24:30.082820974Z" level=info msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" Jan 15 13:24:30.085101 containerd[1622]: time="2025-01-15T13:24:30.084090121Z" level=info msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" Jan 15 13:24:30.085101 containerd[1622]: time="2025-01-15T13:24:30.084278274Z" level=info msg="Ensure that sandbox 1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e in task-service has been cleanup successfully" Jan 15 13:24:30.086824 containerd[1622]: time="2025-01-15T13:24:30.086298865Z" 
level=info msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" Jan 15 13:24:30.086824 containerd[1622]: time="2025-01-15T13:24:30.086506831Z" level=info msg="Ensure that sandbox 6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf in task-service has been cleanup successfully" Jan 15 13:24:30.094678 containerd[1622]: time="2025-01-15T13:24:30.094626683Z" level=info msg="Ensure that sandbox 1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c in task-service has been cleanup successfully" Jan 15 13:24:30.101180 kubelet[2936]: I0115 13:24:30.099281 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:30.101392 containerd[1622]: time="2025-01-15T13:24:30.101349608Z" level=info msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" Jan 15 13:24:30.101680 containerd[1622]: time="2025-01-15T13:24:30.101548798Z" level=info msg="Ensure that sandbox 6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672 in task-service has been cleanup successfully" Jan 15 13:24:30.107219 kubelet[2936]: I0115 13:24:30.107188 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:30.109868 containerd[1622]: time="2025-01-15T13:24:30.109835905Z" level=info msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" Jan 15 13:24:30.110986 containerd[1622]: time="2025-01-15T13:24:30.110786878Z" level=info msg="Ensure that sandbox 9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da in task-service has been cleanup successfully" Jan 15 13:24:30.116789 kubelet[2936]: I0115 13:24:30.116293 2936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:30.119510 containerd[1622]: time="2025-01-15T13:24:30.119468587Z" level=info msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" Jan 15 13:24:30.123551 containerd[1622]: time="2025-01-15T13:24:30.123522075Z" level=info msg="Ensure that sandbox ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544 in task-service has been cleanup successfully" Jan 15 13:24:30.248262 containerd[1622]: time="2025-01-15T13:24:30.248194150Z" level=error msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" failed" error="failed to destroy network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.249329 kubelet[2936]: E0115 13:24:30.248876 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:30.249329 kubelet[2936]: E0115 13:24:30.249132 2936 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da"} Jan 15 13:24:30.249329 kubelet[2936]: E0115 13:24:30.249211 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e38af2f-9a3e-4164-8caa-1169f5058aa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.249329 kubelet[2936]: E0115 13:24:30.249286 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e38af2f-9a3e-4164-8caa-1169f5058aa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v2jgb" podUID="5e38af2f-9a3e-4164-8caa-1169f5058aa2" Jan 15 13:24:30.251906 containerd[1622]: time="2025-01-15T13:24:30.247601483Z" level=error msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" failed" error="failed to destroy network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.253487 kubelet[2936]: E0115 13:24:30.253305 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:30.253487 kubelet[2936]: E0115 13:24:30.253359 2936 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e"} Jan 15 13:24:30.253487 kubelet[2936]: E0115 13:24:30.253419 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1347138-b178-46fe-b1c2-417fdd2c6573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.253487 kubelet[2936]: E0115 13:24:30.253457 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1347138-b178-46fe-b1c2-417fdd2c6573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n82fg" podUID="b1347138-b178-46fe-b1c2-417fdd2c6573" Jan 15 13:24:30.256746 containerd[1622]: time="2025-01-15T13:24:30.256420299Z" level=error msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" failed" error="failed to destroy network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.259960 kubelet[2936]: E0115 13:24:30.259176 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:30.259960 kubelet[2936]: E0115 13:24:30.259221 2936 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c"} Jan 15 13:24:30.259960 kubelet[2936]: E0115 13:24:30.259271 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b926c45-7bff-4737-b734-8ce12b4b6c1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.259960 kubelet[2936]: E0115 13:24:30.259322 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b926c45-7bff-4737-b734-8ce12b4b6c1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" podUID="2b926c45-7bff-4737-b734-8ce12b4b6c1f" Jan 15 13:24:30.260557 containerd[1622]: time="2025-01-15T13:24:30.260512343Z" level=error msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" failed" error="failed to destroy network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.261101 kubelet[2936]: E0115 13:24:30.260750 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:30.261101 kubelet[2936]: E0115 13:24:30.260785 2936 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672"} Jan 15 13:24:30.261101 kubelet[2936]: E0115 13:24:30.260871 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bab30654-a49a-4455-9e85-a89c233ead6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.261101 kubelet[2936]: E0115 13:24:30.260998 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bab30654-a49a-4455-9e85-a89c233ead6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z9xn" podUID="bab30654-a49a-4455-9e85-a89c233ead6f" Jan 15 13:24:30.265706 containerd[1622]: time="2025-01-15T13:24:30.264150782Z" level=error msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" failed" error="failed to destroy network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.265706 containerd[1622]: time="2025-01-15T13:24:30.264311066Z" level=error msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" failed" error="failed to destroy network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 13:24:30.266009 kubelet[2936]: E0115 13:24:30.265156 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:30.266009 kubelet[2936]: E0115 13:24:30.265191 2936 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544"} Jan 15 13:24:30.266009 kubelet[2936]: E0115 13:24:30.265238 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"743a8cfe-c26f-44f9-9a91-9c52598564bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.266009 kubelet[2936]: E0115 13:24:30.265274 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"743a8cfe-c26f-44f9-9a91-9c52598564bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" podUID="743a8cfe-c26f-44f9-9a91-9c52598564bc" Jan 15 13:24:30.266301 kubelet[2936]: E0115 13:24:30.265327 2936 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:30.266301 kubelet[2936]: E0115 13:24:30.265471 2936 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf"} Jan 15 13:24:30.266301 kubelet[2936]: E0115 13:24:30.265665 2936 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"396b8a00-1f57-47d3-b1f7-4b055b17ec02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 13:24:30.266301 kubelet[2936]: E0115 13:24:30.266016 2936 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"396b8a00-1f57-47d3-b1f7-4b055b17ec02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" podUID="396b8a00-1f57-47d3-b1f7-4b055b17ec02" Jan 15 13:24:39.014198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621374445.mount: Deactivated successfully. 
Jan 15 13:24:39.115008 containerd[1622]: time="2025-01-15T13:24:39.114945720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:39.115768 containerd[1622]: time="2025-01-15T13:24:39.115695993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 15 13:24:39.147635 containerd[1622]: time="2025-01-15T13:24:39.147545063Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:39.148625 containerd[1622]: time="2025-01-15T13:24:39.148557821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:39.153377 containerd[1622]: time="2025-01-15T13:24:39.153206434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.073322626s" Jan 15 13:24:39.153377 containerd[1622]: time="2025-01-15T13:24:39.153277077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 15 13:24:39.217324 containerd[1622]: time="2025-01-15T13:24:39.217263039Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 13:24:39.276239 systemd-journald[1187]: Under memory pressure, flushing caches. Jan 15 13:24:39.267703 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 15 13:24:39.267785 systemd-resolved[1514]: Flushed all caches. Jan 15 13:24:39.283160 containerd[1622]: time="2025-01-15T13:24:39.283075778Z" level=info msg="CreateContainer within sandbox \"f06b252dd16e32288821da4f8c724f648760c136b1d1f0e9e186996e30c52be7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cffd1936edcae8bb598b07d2a0bdf171dc0429bd63222fd772867b5937e3dc26\"" Jan 15 13:24:39.288322 containerd[1622]: time="2025-01-15T13:24:39.288269921Z" level=info msg="StartContainer for \"cffd1936edcae8bb598b07d2a0bdf171dc0429bd63222fd772867b5937e3dc26\"" Jan 15 13:24:39.618281 containerd[1622]: time="2025-01-15T13:24:39.618032910Z" level=info msg="StartContainer for \"cffd1936edcae8bb598b07d2a0bdf171dc0429bd63222fd772867b5937e3dc26\" returns successfully" Jan 15 13:24:39.733618 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 13:24:39.733956 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 15 13:24:41.339602 systemd-journald[1187]: Under memory pressure, flushing caches. Jan 15 13:24:41.337040 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 15 13:24:41.337066 systemd-resolved[1514]: Flushed all caches. 
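For scale, the pull above reports 142,742,010 bytes read over 10.073322626 s, roughly 14 MB/s (about 13.5 MiB/s). A quick sanity check of that arithmetic, using only the figures from the log:

```go
package main

import "fmt"

func main() {
	// Figures reported above for ghcr.io/flatcar/calico/node:v3.29.1.
	const bytesRead = 142742010      // "active requests=0, bytes read=142742010"
	const pullSeconds = 10.073322626 // "... in 10.073322626s"

	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		float64(bytesRead)/pullSeconds/1e6,
		float64(bytesRead)/pullSeconds/float64(1<<20))
	// Output: effective pull rate: 14.2 MB/s (13.5 MiB/s)
}
```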
Jan 15 13:24:41.846025 kernel: bpftool[4140]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 13:24:42.241453 systemd-networkd[1260]: vxlan.calico: Link UP Jan 15 13:24:42.241467 systemd-networkd[1260]: vxlan.calico: Gained carrier Jan 15 13:24:42.244164 systemd[1]: run-containerd-runc-k8s.io-cffd1936edcae8bb598b07d2a0bdf171dc0429bd63222fd772867b5937e3dc26-runc.yIL3nM.mount: Deactivated successfully. Jan 15 13:24:42.886669 containerd[1622]: time="2025-01-15T13:24:42.886361001Z" level=info msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" Jan 15 13:24:42.888722 containerd[1622]: time="2025-01-15T13:24:42.887272688Z" level=info msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" Jan 15 13:24:43.029905 kubelet[2936]: I0115 13:24:43.029377 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-cgc7n" podStartSLOduration=4.8934308699999995 podStartE2EDuration="28.006301956s" podCreationTimestamp="2025-01-15 13:24:15 +0000 UTC" firstStartedPulling="2025-01-15 13:24:16.040734498 +0000 UTC m=+25.497084173" lastFinishedPulling="2025-01-15 13:24:39.153605585 +0000 UTC m=+48.609955259" observedRunningTime="2025-01-15 13:24:40.269037546 +0000 UTC m=+49.725387252" watchObservedRunningTime="2025-01-15 13:24:43.006301956 +0000 UTC m=+52.462651643" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.001 [INFO][4274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.003 [INFO][4274] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" iface="eth0" netns="/var/run/netns/cni-a874e139-dd66-ecda-51cb-4a7f695058f9" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.005 [INFO][4274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" iface="eth0" netns="/var/run/netns/cni-a874e139-dd66-ecda-51cb-4a7f695058f9" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.007 [INFO][4274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" iface="eth0" netns="/var/run/netns/cni-a874e139-dd66-ecda-51cb-4a7f695058f9" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.007 [INFO][4274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.007 [INFO][4274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.185 [INFO][4285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.185 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.186 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.201 [WARNING][4285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.202 [INFO][4285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.204 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:43.215539 containerd[1622]: 2025-01-15 13:24:43.209 [INFO][4274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:43.219288 systemd[1]: run-netns-cni\x2da874e139\x2ddd66\x2decda\x2d51cb\x2d4a7f695058f9.mount: Deactivated successfully. Jan 15 13:24:43.227225 containerd[1622]: time="2025-01-15T13:24:43.226828298Z" level=info msg="TearDown network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" successfully" Jan 15 13:24:43.227225 containerd[1622]: time="2025-01-15T13:24:43.227000935Z" level=info msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" returns successfully" Jan 15 13:24:43.228958 containerd[1622]: time="2025-01-15T13:24:43.228640141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n82fg,Uid:b1347138-b178-46fe-b1c2-417fdd2c6573,Namespace:kube-system,Attempt:1,}" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.011 [INFO][4269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.011 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" iface="eth0" netns="/var/run/netns/cni-453cd0fa-a0b6-e86e-1d83-f28920a33dab" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.012 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" iface="eth0" netns="/var/run/netns/cni-453cd0fa-a0b6-e86e-1d83-f28920a33dab" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.012 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" iface="eth0" netns="/var/run/netns/cni-453cd0fa-a0b6-e86e-1d83-f28920a33dab" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.012 [INFO][4269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.012 [INFO][4269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.184 [INFO][4286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.186 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.204 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.218 [WARNING][4286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.218 [INFO][4286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.223 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:43.229605 containerd[1622]: 2025-01-15 13:24:43.226 [INFO][4269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:43.233368 containerd[1622]: time="2025-01-15T13:24:43.232968122Z" level=info msg="TearDown network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" successfully" Jan 15 13:24:43.233368 containerd[1622]: time="2025-01-15T13:24:43.232998653Z" level=info msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" returns successfully" Jan 15 13:24:43.233780 containerd[1622]: time="2025-01-15T13:24:43.233552129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-x4jkn,Uid:2b926c45-7bff-4737-b734-8ce12b4b6c1f,Namespace:calico-apiserver,Attempt:1,}" Jan 15 13:24:43.235854 systemd[1]: run-netns-cni\x2d453cd0fa\x2da0b6\x2de86e\x2d1d83\x2df28920a33dab.mount: Deactivated successfully. 
Jan 15 13:24:43.503452 systemd-networkd[1260]: cali90bff2ee732: Link UP Jan 15 13:24:43.507718 systemd-networkd[1260]: cali90bff2ee732: Gained carrier Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.350 [INFO][4300] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0 calico-apiserver-684c6c87d- calico-apiserver 2b926c45-7bff-4737-b734-8ce12b4b6c1f 757 0 2025-01-15 13:24:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684c6c87d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com calico-apiserver-684c6c87d-x4jkn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90bff2ee732 [] []}} ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.353 [INFO][4300] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.418 [INFO][4322] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" HandleID="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.447 [INFO][4322] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" HandleID="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-yypfq.gb1.brightbox.com", "pod":"calico-apiserver-684c6c87d-x4jkn", "timestamp":"2025-01-15 13:24:43.418731742 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.447 [INFO][4322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.447 [INFO][4322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.447 [INFO][4322] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.450 [INFO][4322] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.458 [INFO][4322] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.464 [INFO][4322] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.467 [INFO][4322] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.470 [INFO][4322] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.470 [INFO][4322] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.472 [INFO][4322] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.482 [INFO][4322] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.489 [INFO][4322] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 handle="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.489 [INFO][4322] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.489 [INFO][4322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
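The address math behind this assignment: the host affinity block is 192.168.40.0/26, i.e. 64 addresses (192.168.40.0 through 192.168.40.63) reserved for this node, and the workloads set up in the rest of this log draw 192.168.40.1, .2 and .3 from it in turn. A small illustrative check of the block size with Go's net/netip (not Calico code, just the arithmetic):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The per-host affinity block from the IPAM entries above.
	block := netip.MustParsePrefix("192.168.40.0/26")

	n := 0
	var first, last netip.Addr
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if n == 0 {
			first = a
		}
		last = a
		n++
	}
	fmt.Printf("%s holds %d addresses (%s - %s)\n", block, n, first, last)
	// Output: 192.168.40.0/26 holds 64 addresses (192.168.40.0 - 192.168.40.63)
}
```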
Jan 15 13:24:43.538017 containerd[1622]: 2025-01-15 13:24:43.489 [INFO][4322] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" HandleID="k8s-pod-network.16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.492 [INFO][4300] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b926c45-7bff-4737-b734-8ce12b4b6c1f", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-684c6c87d-x4jkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90bff2ee732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.493 [INFO][4300] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.1/32] ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.493 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90bff2ee732 ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.510 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.512 [INFO][4300] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b926c45-7bff-4737-b734-8ce12b4b6c1f", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa", Pod:"calico-apiserver-684c6c87d-x4jkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90bff2ee732", MAC:"4e:6e:62:d3:ba:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:43.541955 containerd[1622]: 2025-01-15 13:24:43.533 [INFO][4300] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-x4jkn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:43.594559 systemd-networkd[1260]: calid12ab4324f3: Link UP Jan 15 13:24:43.596266 systemd-networkd[1260]: calid12ab4324f3: Gained carrier Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.356 [INFO][4299] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0 coredns-76f75df574- kube-system b1347138-b178-46fe-b1c2-417fdd2c6573 756 0 2025-01-15 13:24:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com coredns-76f75df574-n82fg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid12ab4324f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.357 [INFO][4299] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" 
Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.424 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" HandleID="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.449 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" HandleID="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-yypfq.gb1.brightbox.com", "pod":"coredns-76f75df574-n82fg", "timestamp":"2025-01-15 13:24:43.423993468 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.449 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.489 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.491 [INFO][4326] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.494 [INFO][4326] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.502 [INFO][4326] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.531 [INFO][4326] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.539 [INFO][4326] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.548 [INFO][4326] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.548 [INFO][4326] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.552 [INFO][4326] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3 Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.566 [INFO][4326] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" 
host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.582 [INFO][4326] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 handle="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.582 [INFO][4326] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.583 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:43.630837 containerd[1622]: 2025-01-15 13:24:43.583 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" HandleID="k8s-pod-network.0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.586 [INFO][4299] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1347138-b178-46fe-b1c2-417fdd2c6573", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-n82fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid12ab4324f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.586 [INFO][4299] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.2/32] ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" 
WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.586 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid12ab4324f3 ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.597 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.600 [INFO][4299] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1347138-b178-46fe-b1c2-417fdd2c6573", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3", Pod:"coredns-76f75df574-n82fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid12ab4324f3", MAC:"6e:b3:6e:65:cd:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:43.632869 containerd[1622]: 2025-01-15 13:24:43.622 [INFO][4299] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3" Namespace="kube-system" Pod="coredns-76f75df574-n82fg" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:43.669000 containerd[1622]: time="2025-01-15T13:24:43.667139180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:43.669465 containerd[1622]: time="2025-01-15T13:24:43.669268455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:43.669465 containerd[1622]: time="2025-01-15T13:24:43.669365772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:43.670284 containerd[1622]: time="2025-01-15T13:24:43.670201395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:43.691955 containerd[1622]: time="2025-01-15T13:24:43.691657747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:43.691955 containerd[1622]: time="2025-01-15T13:24:43.691831066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:43.692445 containerd[1622]: time="2025-01-15T13:24:43.691925854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:43.693724 containerd[1622]: time="2025-01-15T13:24:43.692501297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:43.821727 containerd[1622]: time="2025-01-15T13:24:43.821680569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-x4jkn,Uid:2b926c45-7bff-4737-b734-8ce12b4b6c1f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa\"" Jan 15 13:24:43.826436 containerd[1622]: time="2025-01-15T13:24:43.825362037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 13:24:43.830851 containerd[1622]: time="2025-01-15T13:24:43.830819084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n82fg,Uid:b1347138-b178-46fe-b1c2-417fdd2c6573,Namespace:kube-system,Attempt:1,} returns sandbox id \"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3\"" Jan 15 13:24:43.837257 containerd[1622]: time="2025-01-15T13:24:43.837221838Z" level=info msg="CreateContainer within sandbox \"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 13:24:43.871661 containerd[1622]: time="2025-01-15T13:24:43.871618764Z" level=info msg="CreateContainer within sandbox \"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d79aca69c8c8d5c1fc7926c88a980b2b526f8b9ea7cfdff69c0c8e552341c526\"" Jan 15 13:24:43.872511 containerd[1622]: time="2025-01-15T13:24:43.872479170Z" level=info msg="StartContainer for \"d79aca69c8c8d5c1fc7926c88a980b2b526f8b9ea7cfdff69c0c8e552341c526\"" Jan 15 13:24:43.891245 containerd[1622]: time="2025-01-15T13:24:43.889643179Z" level=info msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" Jan 15 13:24:44.051338 containerd[1622]: time="2025-01-15T13:24:44.049201364Z" level=info msg="StartContainer for \"d79aca69c8c8d5c1fc7926c88a980b2b526f8b9ea7cfdff69c0c8e552341c526\" returns successfully" Jan 15 13:24:44.067258 systemd-networkd[1260]: vxlan.calico: 
Gained IPv6LL Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.045 [INFO][4465] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.046 [INFO][4465] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" iface="eth0" netns="/var/run/netns/cni-2d0b432d-4c06-4767-a202-d8806ea64a5f" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.047 [INFO][4465] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" iface="eth0" netns="/var/run/netns/cni-2d0b432d-4c06-4767-a202-d8806ea64a5f" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.048 [INFO][4465] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" iface="eth0" netns="/var/run/netns/cni-2d0b432d-4c06-4767-a202-d8806ea64a5f" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.048 [INFO][4465] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.048 [INFO][4465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.100 [INFO][4494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.101 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.101 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.109 [WARNING][4494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.109 [INFO][4494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.111 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:44.116904 containerd[1622]: 2025-01-15 13:24:44.114 [INFO][4465] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:44.119352 containerd[1622]: time="2025-01-15T13:24:44.117673750Z" level=info msg="TearDown network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" successfully" Jan 15 13:24:44.119352 containerd[1622]: time="2025-01-15T13:24:44.117711590Z" level=info msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" returns successfully" Jan 15 13:24:44.119632 containerd[1622]: time="2025-01-15T13:24:44.119376643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z9xn,Uid:bab30654-a49a-4455-9e85-a89c233ead6f,Namespace:calico-system,Attempt:1,}" Jan 15 13:24:44.246215 systemd[1]: run-netns-cni\x2d2d0b432d\x2d4c06\x2d4767\x2da202\x2dd8806ea64a5f.mount: Deactivated successfully. Jan 15 13:24:44.253305 kubelet[2936]: I0115 13:24:44.252925 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n82fg" podStartSLOduration=40.25186743 podStartE2EDuration="40.25186743s" podCreationTimestamp="2025-01-15 13:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:24:44.226502129 +0000 UTC m=+53.682851830" watchObservedRunningTime="2025-01-15 13:24:44.25186743 +0000 UTC m=+53.708217120" Jan 15 13:24:44.384558 systemd-networkd[1260]: cali04f8ea1ca9f: Link UP Jan 15 13:24:44.384953 systemd-networkd[1260]: cali04f8ea1ca9f: Gained carrier Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.189 [INFO][4506] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0 csi-node-driver- calico-system bab30654-a49a-4455-9e85-a89c233ead6f 769 0 2025-01-15 13:24:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com csi-node-driver-8z9xn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali04f8ea1ca9f [] []}} ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.190 [INFO][4506] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.294 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" HandleID="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.313 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" 
HandleID="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bead0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-yypfq.gb1.brightbox.com", "pod":"csi-node-driver-8z9xn", "timestamp":"2025-01-15 13:24:44.294011151 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.313 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.314 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.314 [INFO][4516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.317 [INFO][4516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.325 [INFO][4516] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.335 [INFO][4516] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.343 [INFO][4516] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.347 [INFO][4516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.348 [INFO][4516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.351 [INFO][4516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6 Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.361 [INFO][4516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.368 [INFO][4516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.369 [INFO][4516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.369 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:24:44.424275 containerd[1622]: 2025-01-15 13:24:44.369 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" HandleID="k8s-pod-network.088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.372 [INFO][4506] cni-plugin/k8s.go 386: Populated endpoint ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bab30654-a49a-4455-9e85-a89c233ead6f", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-8z9xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8ea1ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.373 [INFO][4506] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.3/32] ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.373 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04f8ea1ca9f ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.387 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.396 [INFO][4506] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bab30654-a49a-4455-9e85-a89c233ead6f", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6", Pod:"csi-node-driver-8z9xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8ea1ca9f", MAC:"3e:33:8b:23:d3:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:44.425526 containerd[1622]: 2025-01-15 13:24:44.420 [INFO][4506] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6" Namespace="calico-system" Pod="csi-node-driver-8z9xn" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:44.464667 containerd[1622]: time="2025-01-15T13:24:44.462457053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:44.464667 containerd[1622]: time="2025-01-15T13:24:44.464611715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:44.464667 containerd[1622]: time="2025-01-15T13:24:44.464633705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:44.465264 containerd[1622]: time="2025-01-15T13:24:44.464776138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:44.527311 containerd[1622]: time="2025-01-15T13:24:44.527259420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z9xn,Uid:bab30654-a49a-4455-9e85-a89c233ead6f,Namespace:calico-system,Attempt:1,} returns sandbox id \"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6\"" Jan 15 13:24:44.579107 systemd-networkd[1260]: cali90bff2ee732: Gained IPv6LL Jan 15 13:24:44.886632 containerd[1622]: time="2025-01-15T13:24:44.886560625Z" level=info msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" Jan 15 13:24:44.889100 containerd[1622]: time="2025-01-15T13:24:44.889037521Z" level=info msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.971 [INFO][4609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.971 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" iface="eth0" netns="/var/run/netns/cni-0aa028fe-4c7c-c4cc-c657-955a21c44186" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.971 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" iface="eth0" netns="/var/run/netns/cni-0aa028fe-4c7c-c4cc-c657-955a21c44186" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.973 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" iface="eth0" netns="/var/run/netns/cni-0aa028fe-4c7c-c4cc-c657-955a21c44186" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.973 [INFO][4609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:44.973 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.025 [INFO][4621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.025 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.025 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.035 [WARNING][4621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.036 [INFO][4621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.037 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:45.046638 containerd[1622]: 2025-01-15 13:24:45.041 [INFO][4609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:45.050262 containerd[1622]: time="2025-01-15T13:24:45.049854913Z" level=info msg="TearDown network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" successfully" Jan 15 13:24:45.050262 containerd[1622]: time="2025-01-15T13:24:45.049922992Z" level=info msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" returns successfully" Jan 15 13:24:45.052961 containerd[1622]: time="2025-01-15T13:24:45.052566825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v2jgb,Uid:5e38af2f-9a3e-4164-8caa-1169f5058aa2,Namespace:kube-system,Attempt:1,}" Jan 15 13:24:45.055689 systemd[1]: run-netns-cni\x2d0aa028fe\x2d4c7c\x2dc4cc\x2dc657\x2d955a21c44186.mount: Deactivated successfully. Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.980 [INFO][4610] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.981 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" iface="eth0" netns="/var/run/netns/cni-80c7caf9-50a0-c37d-d60c-447950b94891" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.982 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" iface="eth0" netns="/var/run/netns/cni-80c7caf9-50a0-c37d-d60c-447950b94891" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.982 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" iface="eth0" netns="/var/run/netns/cni-80c7caf9-50a0-c37d-d60c-447950b94891" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.982 [INFO][4610] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:44.982 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.031 [INFO][4623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.031 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.037 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.048 [WARNING][4623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.048 [INFO][4623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.052 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:45.060206 containerd[1622]: 2025-01-15 13:24:45.058 [INFO][4610] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:45.062271 containerd[1622]: time="2025-01-15T13:24:45.060351713Z" level=info msg="TearDown network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" successfully" Jan 15 13:24:45.062271 containerd[1622]: time="2025-01-15T13:24:45.060384761Z" level=info msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" returns successfully" Jan 15 13:24:45.062271 containerd[1622]: time="2025-01-15T13:24:45.061288801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-dqvv4,Uid:743a8cfe-c26f-44f9-9a91-9c52598564bc,Namespace:calico-apiserver,Attempt:1,}" Jan 15 13:24:45.065229 systemd[1]: run-netns-cni\x2d80c7caf9\x2d50a0\x2dc37d\x2dd60c\x2d447950b94891.mount: Deactivated successfully. 
Jan 15 13:24:45.330298 systemd-networkd[1260]: cali593c7d8e79c: Link UP Jan 15 13:24:45.333328 systemd-networkd[1260]: cali593c7d8e79c: Gained carrier Jan 15 13:24:45.346330 systemd-networkd[1260]: calid12ab4324f3: Gained IPv6LL Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.147 [INFO][4633] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0 coredns-76f75df574- kube-system 5e38af2f-9a3e-4164-8caa-1169f5058aa2 783 0 2025-01-15 13:24:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com coredns-76f75df574-v2jgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali593c7d8e79c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.148 [INFO][4633] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.215 [INFO][4656] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" HandleID="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.270 [INFO][4656] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" HandleID="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293890), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-yypfq.gb1.brightbox.com", "pod":"coredns-76f75df574-v2jgb", "timestamp":"2025-01-15 13:24:45.215130541 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.270 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.270 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.270 [INFO][4656] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.275 [INFO][4656] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.283 [INFO][4656] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.292 [INFO][4656] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.294 [INFO][4656] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.300 [INFO][4656] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.300 [INFO][4656] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.303 [INFO][4656] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.308 [INFO][4656] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.315 [INFO][4656] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 handle="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.315 [INFO][4656] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.316 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:24:45.364149 containerd[1622]: 2025-01-15 13:24:45.316 [INFO][4656] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" HandleID="k8s-pod-network.c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.320 [INFO][4633] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5e38af2f-9a3e-4164-8caa-1169f5058aa2", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-v2jgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali593c7d8e79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.320 [INFO][4633] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.4/32] ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.321 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali593c7d8e79c ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.331 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" 
WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.332 [INFO][4633] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5e38af2f-9a3e-4164-8caa-1169f5058aa2", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e", Pod:"coredns-76f75df574-v2jgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali593c7d8e79c", MAC:"ce:1a:ac:2a:2c:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:45.365814 containerd[1622]: 2025-01-15 13:24:45.353 [INFO][4633] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e" Namespace="kube-system" Pod="coredns-76f75df574-v2jgb" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:45.414607 systemd-networkd[1260]: calid05b11d451f: Link UP Jan 15 13:24:45.416031 systemd-networkd[1260]: calid05b11d451f: Gained carrier Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.166 [INFO][4643] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0 calico-apiserver-684c6c87d- calico-apiserver 743a8cfe-c26f-44f9-9a91-9c52598564bc 784 0 2025-01-15 13:24:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684c6c87d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com calico-apiserver-684c6c87d-dqvv4 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid05b11d451f [] []}} ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.167 [INFO][4643] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.248 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" HandleID="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.284 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" HandleID="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-yypfq.gb1.brightbox.com", "pod":"calico-apiserver-684c6c87d-dqvv4", "timestamp":"2025-01-15 13:24:45.248632353 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.285 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.316 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.316 [INFO][4660] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.319 [INFO][4660] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.332 [INFO][4660] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.359 [INFO][4660] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.368 [INFO][4660] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.373 [INFO][4660] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.373 [INFO][4660] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.379 [INFO][4660] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.390 [INFO][4660] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.402 [INFO][4660] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 handle="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.403 [INFO][4660] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.403 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:24:45.444501 containerd[1622]: 2025-01-15 13:24:45.403 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" HandleID="k8s-pod-network.d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.406 [INFO][4643] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"743a8cfe-c26f-44f9-9a91-9c52598564bc", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-684c6c87d-dqvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05b11d451f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.406 [INFO][4643] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.5/32] ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.406 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid05b11d451f ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.416 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.418 [INFO][4643] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"743a8cfe-c26f-44f9-9a91-9c52598564bc", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f", Pod:"calico-apiserver-684c6c87d-dqvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05b11d451f", MAC:"92:dc:9e:9f:a0:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:45.449545 containerd[1622]: 2025-01-15 13:24:45.439 [INFO][4643] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f" Namespace="calico-apiserver" Pod="calico-apiserver-684c6c87d-dqvv4" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:45.504983 containerd[1622]: time="2025-01-15T13:24:45.501253694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:45.504983 containerd[1622]: time="2025-01-15T13:24:45.501321631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:45.504983 containerd[1622]: time="2025-01-15T13:24:45.501338879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:45.504983 containerd[1622]: time="2025-01-15T13:24:45.501474829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:45.538442 systemd-networkd[1260]: cali04f8ea1ca9f: Gained IPv6LL Jan 15 13:24:45.608704 containerd[1622]: time="2025-01-15T13:24:45.605213932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:45.616653 containerd[1622]: time="2025-01-15T13:24:45.612161747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:45.616653 containerd[1622]: time="2025-01-15T13:24:45.612195314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:45.616653 containerd[1622]: time="2025-01-15T13:24:45.612346847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:45.694406 containerd[1622]: time="2025-01-15T13:24:45.694261521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v2jgb,Uid:5e38af2f-9a3e-4164-8caa-1169f5058aa2,Namespace:kube-system,Attempt:1,} returns sandbox id \"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e\"" Jan 15 13:24:45.731501 containerd[1622]: time="2025-01-15T13:24:45.731444843Z" level=info msg="CreateContainer within sandbox \"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 13:24:45.798082 containerd[1622]: time="2025-01-15T13:24:45.797311919Z" level=info msg="CreateContainer within sandbox \"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a68cfb440a7c66d26e8459730d4045d528bb7718f59bd8da4240145e844dd6a\"" Jan 15 13:24:45.805297 containerd[1622]: time="2025-01-15T13:24:45.800269946Z" level=info msg="StartContainer for \"3a68cfb440a7c66d26e8459730d4045d528bb7718f59bd8da4240145e844dd6a\"" Jan 15 13:24:45.883734 containerd[1622]: time="2025-01-15T13:24:45.882837785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684c6c87d-dqvv4,Uid:743a8cfe-c26f-44f9-9a91-9c52598564bc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f\"" Jan 15 13:24:45.895112 containerd[1622]: time="2025-01-15T13:24:45.895074188Z" level=info msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" Jan 15 13:24:46.002131 containerd[1622]: time="2025-01-15T13:24:46.001958308Z" level=info msg="StartContainer for \"3a68cfb440a7c66d26e8459730d4045d528bb7718f59bd8da4240145e844dd6a\" returns successfully" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.097 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.099 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" iface="eth0" netns="/var/run/netns/cni-2e2edd18-42b3-1370-492d-6ea1a8dd0b90" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.100 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" iface="eth0" netns="/var/run/netns/cni-2e2edd18-42b3-1370-492d-6ea1a8dd0b90" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.101 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" iface="eth0" netns="/var/run/netns/cni-2e2edd18-42b3-1370-492d-6ea1a8dd0b90" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.101 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.101 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.188 [INFO][4835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.188 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.189 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.205 [WARNING][4835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.206 [INFO][4835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.211 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:46.228783 containerd[1622]: 2025-01-15 13:24:46.220 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:46.238416 containerd[1622]: time="2025-01-15T13:24:46.231301494Z" level=info msg="TearDown network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" successfully" Jan 15 13:24:46.238416 containerd[1622]: time="2025-01-15T13:24:46.231340677Z" level=info msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" returns successfully" Jan 15 13:24:46.238416 containerd[1622]: time="2025-01-15T13:24:46.236258470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd67cfc8-vd74p,Uid:396b8a00-1f57-47d3-b1f7-4b055b17ec02,Namespace:calico-system,Attempt:1,}" Jan 15 13:24:46.248071 systemd[1]: run-netns-cni\x2d2e2edd18\x2d42b3\x2d1370\x2d492d\x2d6ea1a8dd0b90.mount: Deactivated successfully. 
Jan 15 13:24:46.262072 kubelet[2936]: I0115 13:24:46.261262 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-v2jgb" podStartSLOduration=42.261204211 podStartE2EDuration="42.261204211s" podCreationTimestamp="2025-01-15 13:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 13:24:46.258725248 +0000 UTC m=+55.715074941" watchObservedRunningTime="2025-01-15 13:24:46.261204211 +0000 UTC m=+55.717553899" Jan 15 13:24:46.617984 systemd-networkd[1260]: calic55f4cd16c1: Link UP Jan 15 13:24:46.620603 systemd-networkd[1260]: calic55f4cd16c1: Gained carrier Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.457 [INFO][4844] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0 calico-kube-controllers-5bd67cfc8- calico-system 396b8a00-1f57-47d3-b1f7-4b055b17ec02 802 0 2025-01-15 13:24:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bd67cfc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-yypfq.gb1.brightbox.com calico-kube-controllers-5bd67cfc8-vd74p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic55f4cd16c1 [] []}} ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.457 [INFO][4844] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.531 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" HandleID="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.546 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" HandleID="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000ff160), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-yypfq.gb1.brightbox.com", "pod":"calico-kube-controllers-5bd67cfc8-vd74p", "timestamp":"2025-01-15 13:24:46.531588709 +0000 UTC"}, Hostname:"srv-yypfq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 13:24:46.653351 containerd[1622]: 
2025-01-15 13:24:46.546 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.547 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.547 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-yypfq.gb1.brightbox.com' Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.553 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.563 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.577 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.581 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.585 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.585 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.589 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232 Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.595 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.607 [INFO][4860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.6/26] block=192.168.40.0/26 handle="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.607 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.6/26] handle="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" host="srv-yypfq.gb1.brightbox.com" Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.607 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 13:24:46.653351 containerd[1622]: 2025-01-15 13:24:46.607 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.6/26] IPv6=[] ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" HandleID="k8s-pod-network.1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.655289 containerd[1622]: 2025-01-15 13:24:46.612 [INFO][4844] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0", GenerateName:"calico-kube-controllers-5bd67cfc8-", Namespace:"calico-system", SelfLink:"", UID:"396b8a00-1f57-47d3-b1f7-4b055b17ec02", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd67cfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5bd67cfc8-vd74p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic55f4cd16c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:46.655289 containerd[1622]: 2025-01-15 13:24:46.612 [INFO][4844] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.6/32] ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.655289 containerd[1622]: 2025-01-15 13:24:46.612 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic55f4cd16c1 ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.655289 containerd[1622]: 2025-01-15 13:24:46.617 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.655289 
containerd[1622]: 2025-01-15 13:24:46.623 [INFO][4844] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0", GenerateName:"calico-kube-controllers-5bd67cfc8-", Namespace:"calico-system", SelfLink:"", UID:"396b8a00-1f57-47d3-b1f7-4b055b17ec02", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd67cfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232", Pod:"calico-kube-controllers-5bd67cfc8-vd74p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic55f4cd16c1", MAC:"5a:5d:49:f0:12:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:46.655289 containerd[1622]: 2025-01-15 13:24:46.647 [INFO][4844] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232" Namespace="calico-system" Pod="calico-kube-controllers-5bd67cfc8-vd74p" WorkloadEndpoint="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:46.690119 systemd-networkd[1260]: calid05b11d451f: Gained IPv6LL Jan 15 13:24:46.723943 containerd[1622]: time="2025-01-15T13:24:46.715158969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 13:24:46.723943 containerd[1622]: time="2025-01-15T13:24:46.715225763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 13:24:46.723943 containerd[1622]: time="2025-01-15T13:24:46.715242367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:46.723943 containerd[1622]: time="2025-01-15T13:24:46.715492206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 13:24:46.814350 containerd[1622]: time="2025-01-15T13:24:46.814280403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd67cfc8-vd74p,Uid:396b8a00-1f57-47d3-b1f7-4b055b17ec02,Namespace:calico-system,Attempt:1,} returns sandbox id \"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232\"" Jan 15 13:24:47.138744 systemd-networkd[1260]: cali593c7d8e79c: Gained IPv6LL Jan 15 13:24:47.236995 systemd[1]: run-containerd-runc-k8s.io-1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232-runc.PaFLBU.mount: Deactivated successfully. Jan 15 13:24:47.778357 systemd-networkd[1260]: calic55f4cd16c1: Gained IPv6LL Jan 15 13:24:48.366346 containerd[1622]: time="2025-01-15T13:24:48.366247475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:48.368184 containerd[1622]: time="2025-01-15T13:24:48.368130453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 15 13:24:48.369569 containerd[1622]: time="2025-01-15T13:24:48.369433746Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:48.374716 containerd[1622]: time="2025-01-15T13:24:48.374676722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:48.376336 containerd[1622]: time="2025-01-15T13:24:48.376107610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.550673801s" Jan 15 13:24:48.376336 containerd[1622]: time="2025-01-15T13:24:48.376178389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 13:24:48.379588 containerd[1622]: time="2025-01-15T13:24:48.378367331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 13:24:48.379861 containerd[1622]: time="2025-01-15T13:24:48.379811844Z" level=info msg="CreateContainer within sandbox \"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 13:24:48.404786 containerd[1622]: time="2025-01-15T13:24:48.404634273Z" level=info msg="CreateContainer within sandbox \"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5e0938c9839001be5706417d69647c157ceea52bdcc80f0aa19900e70b2a582\"" Jan 15 13:24:48.406061 containerd[1622]: time="2025-01-15T13:24:48.405578016Z" level=info msg="StartContainer for \"d5e0938c9839001be5706417d69647c157ceea52bdcc80f0aa19900e70b2a582\"" Jan 15 13:24:48.530140 containerd[1622]: time="2025-01-15T13:24:48.527035674Z" level=info msg="StartContainer for \"d5e0938c9839001be5706417d69647c157ceea52bdcc80f0aa19900e70b2a582\" returns successfully" Jan 15 
13:24:49.278387 kubelet[2936]: I0115 13:24:49.278242 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684c6c87d-x4jkn" podStartSLOduration=30.725612106 podStartE2EDuration="35.278062175s" podCreationTimestamp="2025-01-15 13:24:14 +0000 UTC" firstStartedPulling="2025-01-15 13:24:43.824296151 +0000 UTC m=+53.280645835" lastFinishedPulling="2025-01-15 13:24:48.376746213 +0000 UTC m=+57.833095904" observedRunningTime="2025-01-15 13:24:49.274844883 +0000 UTC m=+58.731194575" watchObservedRunningTime="2025-01-15 13:24:49.278062175 +0000 UTC m=+58.734411867" Jan 15 13:24:50.392380 kubelet[2936]: I0115 13:24:50.392061 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:24:50.602948 containerd[1622]: time="2025-01-15T13:24:50.602627560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:50.605159 containerd[1622]: time="2025-01-15T13:24:50.605012867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 15 13:24:50.605630 containerd[1622]: time="2025-01-15T13:24:50.605585430Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:50.611772 containerd[1622]: time="2025-01-15T13:24:50.611657583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:50.613350 containerd[1622]: time="2025-01-15T13:24:50.612909323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.234471357s" Jan 15 13:24:50.613350 containerd[1622]: time="2025-01-15T13:24:50.612973519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 15 13:24:50.614665 containerd[1622]: time="2025-01-15T13:24:50.614623552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 13:24:50.618451 containerd[1622]: time="2025-01-15T13:24:50.617667067Z" level=info msg="CreateContainer within sandbox \"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 13:24:50.650026 containerd[1622]: time="2025-01-15T13:24:50.649351975Z" level=info msg="CreateContainer within sandbox \"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd419285cf0fa6a703626c9c2cf47534272c849533ce90deb1fdf0605de3c7cc\"" Jan 15 13:24:50.664960 containerd[1622]: time="2025-01-15T13:24:50.663073103Z" level=info msg="StartContainer for \"fd419285cf0fa6a703626c9c2cf47534272c849533ce90deb1fdf0605de3c7cc\"" Jan 15 13:24:50.770550 containerd[1622]: time="2025-01-15T13:24:50.770494863Z" level=info msg="StartContainer for \"fd419285cf0fa6a703626c9c2cf47534272c849533ce90deb1fdf0605de3c7cc\" returns successfully" Jan 15 13:24:50.868649 
containerd[1622]: time="2025-01-15T13:24:50.868599640Z" level=info msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.945 [WARNING][5022] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bab30654-a49a-4455-9e85-a89c233ead6f", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6", Pod:"csi-node-driver-8z9xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8ea1ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.946 [INFO][5022] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.946 [INFO][5022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" iface="eth0" netns="" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.946 [INFO][5022] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.946 [INFO][5022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.980 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.981 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.981 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.991 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.991 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.993 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:50.998177 containerd[1622]: 2025-01-15 13:24:50.995 [INFO][5022] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:50.999335 containerd[1622]: time="2025-01-15T13:24:50.998740250Z" level=info msg="TearDown network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" successfully" Jan 15 13:24:50.999335 containerd[1622]: time="2025-01-15T13:24:50.998772327Z" level=info msg="StopPodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" returns successfully" Jan 15 13:24:51.005532 containerd[1622]: time="2025-01-15T13:24:51.005311222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:51.006396 containerd[1622]: time="2025-01-15T13:24:51.006326376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 13:24:51.006801 containerd[1622]: time="2025-01-15T13:24:51.006607634Z" level=info msg="RemovePodSandbox for \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" Jan 15 13:24:51.006801 containerd[1622]: time="2025-01-15T13:24:51.006660817Z" level=info msg="Forcibly stopping sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\"" Jan 15 13:24:51.011655 containerd[1622]: time="2025-01-15T13:24:51.011573537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 396.504398ms" Jan 15 13:24:51.014582 containerd[1622]: time="2025-01-15T13:24:51.011623415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 13:24:51.017251 containerd[1622]: time="2025-01-15T13:24:51.017217546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 13:24:51.022435 containerd[1622]: time="2025-01-15T13:24:51.022388521Z" level=info msg="CreateContainer within sandbox \"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 13:24:51.042312 containerd[1622]: time="2025-01-15T13:24:51.042263672Z" level=info 
msg="CreateContainer within sandbox \"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"561a299ca6967b054983001057a572b1801a061300ebf4616d521c107bfc1fac\"" Jan 15 13:24:51.044355 containerd[1622]: time="2025-01-15T13:24:51.044323066Z" level=info msg="StartContainer for \"561a299ca6967b054983001057a572b1801a061300ebf4616d521c107bfc1fac\"" Jan 15 13:24:51.195766 containerd[1622]: time="2025-01-15T13:24:51.195717763Z" level=info msg="StartContainer for \"561a299ca6967b054983001057a572b1801a061300ebf4616d521c107bfc1fac\" returns successfully" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.155 [WARNING][5049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bab30654-a49a-4455-9e85-a89c233ead6f", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6", Pod:"csi-node-driver-8z9xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8ea1ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.155 [INFO][5049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.156 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" iface="eth0" netns="" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.156 [INFO][5049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.156 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.204 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.204 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.204 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.214 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.214 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" HandleID="k8s-pod-network.6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Workload="srv--yypfq.gb1.brightbox.com-k8s-csi--node--driver--8z9xn-eth0" Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.216 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.222191 containerd[1622]: 2025-01-15 13:24:51.218 [INFO][5049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672" Jan 15 13:24:51.223130 containerd[1622]: time="2025-01-15T13:24:51.223093443Z" level=info msg="TearDown network for sandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" successfully" Jan 15 13:24:51.234378 containerd[1622]: time="2025-01-15T13:24:51.234335455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 15 13:24:51.234600 containerd[1622]: time="2025-01-15T13:24:51.234558011Z" level=info msg="RemovePodSandbox \"6bda214dbbcb05268e212afb486f9b661dbb66366b3eedab86db1ad57ec52672\" returns successfully" Jan 15 13:24:51.235470 containerd[1622]: time="2025-01-15T13:24:51.235440415Z" level=info msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" Jan 15 13:24:51.325165 kubelet[2936]: I0115 13:24:51.324573 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684c6c87d-dqvv4" podStartSLOduration=32.194701782 podStartE2EDuration="37.324514552s" podCreationTimestamp="2025-01-15 13:24:14 +0000 UTC" firstStartedPulling="2025-01-15 13:24:45.885198145 +0000 UTC m=+55.341547829" lastFinishedPulling="2025-01-15 13:24:51.015010908 +0000 UTC m=+60.471360599" observedRunningTime="2025-01-15 13:24:51.324421681 +0000 UTC m=+60.780771401" watchObservedRunningTime="2025-01-15 13:24:51.324514552 +0000 UTC m=+60.780864250" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.320 [WARNING][5109] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b926c45-7bff-4737-b734-8ce12b4b6c1f", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa", Pod:"calico-apiserver-684c6c87d-x4jkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90bff2ee732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.321 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.321 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" iface="eth0" netns="" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.321 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.321 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.363 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.363 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.364 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.374 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.374 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.377 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.385178 containerd[1622]: 2025-01-15 13:24:51.378 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.386119 containerd[1622]: time="2025-01-15T13:24:51.386077230Z" level=info msg="TearDown network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" successfully" Jan 15 13:24:51.386250 containerd[1622]: time="2025-01-15T13:24:51.386222857Z" level=info msg="StopPodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" returns successfully" Jan 15 13:24:51.387859 containerd[1622]: time="2025-01-15T13:24:51.387483107Z" level=info msg="RemovePodSandbox for \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" Jan 15 13:24:51.387859 containerd[1622]: time="2025-01-15T13:24:51.387528541Z" level=info msg="Forcibly stopping sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\"" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.451 [WARNING][5139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b926c45-7bff-4737-b734-8ce12b4b6c1f", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"16e3770e91d9132a664a572a33689e7d2de02d0e22c8632db5b4d8439c6d10aa", Pod:"calico-apiserver-684c6c87d-x4jkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90bff2ee732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.453 [INFO][5139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.453 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" iface="eth0" netns="" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.453 [INFO][5139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.453 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.516 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.516 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.516 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.535 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.535 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" HandleID="k8s-pod-network.1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--x4jkn-eth0" Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.538 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.548692 containerd[1622]: 2025-01-15 13:24:51.543 [INFO][5139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c" Jan 15 13:24:51.548692 containerd[1622]: time="2025-01-15T13:24:51.548455211Z" level=info msg="TearDown network for sandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" successfully" Jan 15 13:24:51.557660 containerd[1622]: time="2025-01-15T13:24:51.557506253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:24:51.557863 containerd[1622]: time="2025-01-15T13:24:51.557832496Z" level=info msg="RemovePodSandbox \"1c5d06b05ff42c58b0df3d89d188142f8bf83ee0e9279273df2976b5a09fe14c\" returns successfully" Jan 15 13:24:51.559275 containerd[1622]: time="2025-01-15T13:24:51.559233219Z" level=info msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.682 [WARNING][5165] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"743a8cfe-c26f-44f9-9a91-9c52598564bc", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f", Pod:"calico-apiserver-684c6c87d-dqvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05b11d451f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.682 [INFO][5165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.682 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" iface="eth0" netns="" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.682 [INFO][5165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.682 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.725 [INFO][5171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.726 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.726 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.747 [WARNING][5171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.747 [INFO][5171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.751 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.758928 containerd[1622]: 2025-01-15 13:24:51.754 [INFO][5165] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.758928 containerd[1622]: time="2025-01-15T13:24:51.758715257Z" level=info msg="TearDown network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" successfully" Jan 15 13:24:51.758928 containerd[1622]: time="2025-01-15T13:24:51.758762969Z" level=info msg="StopPodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" returns successfully" Jan 15 13:24:51.762505 containerd[1622]: time="2025-01-15T13:24:51.760398148Z" level=info msg="RemovePodSandbox for \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" Jan 15 13:24:51.762505 containerd[1622]: time="2025-01-15T13:24:51.760435777Z" level=info msg="Forcibly stopping sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\"" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.838 [WARNING][5192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0", GenerateName:"calico-apiserver-684c6c87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"743a8cfe-c26f-44f9-9a91-9c52598564bc", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684c6c87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"d26d5bf7eb5608cdd1543e6217b1966f5c05429f02961b64b23ce3341654b95f", Pod:"calico-apiserver-684c6c87d-dqvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05b11d451f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.840 [INFO][5192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.841 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" iface="eth0" netns="" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.841 [INFO][5192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.841 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.874 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.874 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.874 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.883 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.883 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" HandleID="k8s-pod-network.ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--apiserver--684c6c87d--dqvv4-eth0" Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.885 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.888685 containerd[1622]: 2025-01-15 13:24:51.886 [INFO][5192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544" Jan 15 13:24:51.890447 containerd[1622]: time="2025-01-15T13:24:51.889563595Z" level=info msg="TearDown network for sandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" successfully" Jan 15 13:24:51.893802 containerd[1622]: time="2025-01-15T13:24:51.893581817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:24:51.893802 containerd[1622]: time="2025-01-15T13:24:51.893652319Z" level=info msg="RemovePodSandbox \"ea8d2d27ccfb17f0d0227d8207ed62b666a4edf0137c780b0e70222335d8f544\" returns successfully" Jan 15 13:24:51.894953 containerd[1622]: time="2025-01-15T13:24:51.894514812Z" level=info msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.950 [WARNING][5217] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0", GenerateName:"calico-kube-controllers-5bd67cfc8-", Namespace:"calico-system", SelfLink:"", UID:"396b8a00-1f57-47d3-b1f7-4b055b17ec02", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd67cfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232", Pod:"calico-kube-controllers-5bd67cfc8-vd74p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic55f4cd16c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.950 [INFO][5217] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.950 [INFO][5217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" iface="eth0" netns="" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.951 [INFO][5217] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.951 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.982 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.982 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.982 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.991 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.991 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.994 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:51.998336 containerd[1622]: 2025-01-15 13:24:51.996 [INFO][5217] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:51.999801 containerd[1622]: time="2025-01-15T13:24:51.999144526Z" level=info msg="TearDown network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" successfully" Jan 15 13:24:51.999801 containerd[1622]: time="2025-01-15T13:24:51.999238426Z" level=info msg="StopPodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" returns successfully" Jan 15 13:24:52.002020 containerd[1622]: time="2025-01-15T13:24:52.000094579Z" level=info msg="RemovePodSandbox for \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" Jan 15 13:24:52.002020 containerd[1622]: time="2025-01-15T13:24:52.000251228Z" level=info msg="Forcibly stopping sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\"" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.051 [WARNING][5242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0", GenerateName:"calico-kube-controllers-5bd67cfc8-", Namespace:"calico-system", SelfLink:"", UID:"396b8a00-1f57-47d3-b1f7-4b055b17ec02", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd67cfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232", Pod:"calico-kube-controllers-5bd67cfc8-vd74p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic55f4cd16c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.052 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.052 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" iface="eth0" netns="" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.052 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.052 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.108 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.108 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.108 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.121 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.121 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" HandleID="k8s-pod-network.6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Workload="srv--yypfq.gb1.brightbox.com-k8s-calico--kube--controllers--5bd67cfc8--vd74p-eth0" Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.123 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:52.128059 containerd[1622]: 2025-01-15 13:24:52.125 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf" Jan 15 13:24:52.128059 containerd[1622]: time="2025-01-15T13:24:52.127994104Z" level=info msg="TearDown network for sandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" successfully" Jan 15 13:24:52.132774 containerd[1622]: time="2025-01-15T13:24:52.132732099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:24:52.132872 containerd[1622]: time="2025-01-15T13:24:52.132812845Z" level=info msg="RemovePodSandbox \"6a7c04c3e2510af22a5fb8db1259d4fb59e2317bd5c7980fe133def1887f6edf\" returns successfully" Jan 15 13:24:52.134287 containerd[1622]: time="2025-01-15T13:24:52.134236527Z" level=info msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.186 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1347138-b178-46fe-b1c2-417fdd2c6573", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3", Pod:"coredns-76f75df574-n82fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid12ab4324f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.190 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.190 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" iface="eth0" netns="" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.190 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.190 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.219 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.219 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.219 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.228 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.228 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.230 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:52.234260 containerd[1622]: 2025-01-15 13:24:52.232 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.234260 containerd[1622]: time="2025-01-15T13:24:52.234085546Z" level=info msg="TearDown network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" successfully" Jan 15 13:24:52.234260 containerd[1622]: time="2025-01-15T13:24:52.234126627Z" level=info msg="StopPodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" returns successfully" Jan 15 13:24:52.236340 containerd[1622]: time="2025-01-15T13:24:52.235515612Z" level=info msg="RemovePodSandbox for \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" Jan 15 13:24:52.236340 containerd[1622]: time="2025-01-15T13:24:52.235560050Z" level=info msg="Forcibly stopping sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\"" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.303 [WARNING][5291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1347138-b178-46fe-b1c2-417fdd2c6573", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"0cd94f90fd1ffec918152a3b2f4dac6a98e29a0b5504bc715a5e3e1337ddf8f3", Pod:"coredns-76f75df574-n82fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid12ab4324f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.304 [INFO][5291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.305 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" iface="eth0" netns="" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.305 [INFO][5291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.305 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.336 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.337 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.337 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.347 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.347 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" HandleID="k8s-pod-network.1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--n82fg-eth0" Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.349 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:52.353568 containerd[1622]: 2025-01-15 13:24:52.352 [INFO][5291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e" Jan 15 13:24:52.355628 containerd[1622]: time="2025-01-15T13:24:52.353666750Z" level=info msg="TearDown network for sandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" successfully" Jan 15 13:24:52.357715 containerd[1622]: time="2025-01-15T13:24:52.357681029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:24:52.357790 containerd[1622]: time="2025-01-15T13:24:52.357751027Z" level=info msg="RemovePodSandbox \"1461ae42e873cf65de14a99c023af1e05be210dba0e2b55a9c07eb6778b0dd4e\" returns successfully" Jan 15 13:24:52.359243 containerd[1622]: time="2025-01-15T13:24:52.359206760Z" level=info msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" Jan 15 13:24:52.390621 kubelet[2936]: I0115 13:24:52.389545 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.435 [WARNING][5315] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5e38af2f-9a3e-4164-8caa-1169f5058aa2", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e", Pod:"coredns-76f75df574-v2jgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali593c7d8e79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.435 [INFO][5315] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.435 [INFO][5315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" iface="eth0" netns="" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.435 [INFO][5315] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.435 [INFO][5315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.515 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.516 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.516 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.531 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.531 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.535 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:52.543610 containerd[1622]: 2025-01-15 13:24:52.539 [INFO][5315] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.546037 containerd[1622]: time="2025-01-15T13:24:52.543646658Z" level=info msg="TearDown network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" successfully" Jan 15 13:24:52.546037 containerd[1622]: time="2025-01-15T13:24:52.543681469Z" level=info msg="StopPodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" returns successfully" Jan 15 13:24:52.546037 containerd[1622]: time="2025-01-15T13:24:52.544655658Z" level=info msg="RemovePodSandbox for \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" Jan 15 13:24:52.546037 containerd[1622]: time="2025-01-15T13:24:52.544699090Z" level=info msg="Forcibly stopping sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\"" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.640 [WARNING][5342] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5e38af2f-9a3e-4164-8caa-1169f5058aa2", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 13, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-yypfq.gb1.brightbox.com", ContainerID:"c98a21b07c2f2eb819decf0be9a8d906c3bf165791d2d7392325fb06188ad12e", Pod:"coredns-76f75df574-v2jgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali593c7d8e79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.641 [INFO][5342] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.641 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" iface="eth0" netns="" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.641 [INFO][5342] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.641 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.703 [INFO][5349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.703 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.703 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.716 [WARNING][5349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.716 [INFO][5349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" HandleID="k8s-pod-network.9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Workload="srv--yypfq.gb1.brightbox.com-k8s-coredns--76f75df574--v2jgb-eth0" Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.720 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 13:24:52.726004 containerd[1622]: 2025-01-15 13:24:52.723 [INFO][5342] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da" Jan 15 13:24:52.726758 containerd[1622]: time="2025-01-15T13:24:52.726060589Z" level=info msg="TearDown network for sandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" successfully" Jan 15 13:24:52.731589 containerd[1622]: time="2025-01-15T13:24:52.731527448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 13:24:52.731785 containerd[1622]: time="2025-01-15T13:24:52.731727110Z" level=info msg="RemovePodSandbox \"9108535a67e0a852a0fe07440ac822a31623644a8987c5d72024bb02df81e7da\" returns successfully" Jan 15 13:24:54.810656 containerd[1622]: time="2025-01-15T13:24:54.810591958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:54.812068 containerd[1622]: time="2025-01-15T13:24:54.812004015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 15 13:24:54.813216 containerd[1622]: time="2025-01-15T13:24:54.813152185Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:54.817590 containerd[1622]: time="2025-01-15T13:24:54.817544897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:54.819466 containerd[1622]: time="2025-01-15T13:24:54.819253070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.80198949s" Jan 15 13:24:54.819466 containerd[1622]: time="2025-01-15T13:24:54.819312468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 15 13:24:54.820834 containerd[1622]: time="2025-01-15T13:24:54.820597397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 13:24:54.863411 containerd[1622]: time="2025-01-15T13:24:54.863352198Z" level=info msg="CreateContainer within sandbox \"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 13:24:54.883701 containerd[1622]: time="2025-01-15T13:24:54.882116369Z" level=info msg="CreateContainer within sandbox \"1405d482006aecadb78b306baa96bc78c7721df7fd8dbf9f0f7b94df78e72232\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"90a0bfa08556a828e6167b5d1d00727f43112979b7998b6129146925999af80e\"" Jan 15 13:24:54.886597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871343579.mount: Deactivated successfully. Jan 15 13:24:54.892920 containerd[1622]: time="2025-01-15T13:24:54.891780001Z" level=info msg="StartContainer for \"90a0bfa08556a828e6167b5d1d00727f43112979b7998b6129146925999af80e\"" Jan 15 13:24:55.025688 containerd[1622]: time="2025-01-15T13:24:55.025627683Z" level=info msg="StartContainer for \"90a0bfa08556a828e6167b5d1d00727f43112979b7998b6129146925999af80e\" returns successfully" Jan 15 13:24:55.455959 kubelet[2936]: I0115 13:24:55.455751 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bd67cfc8-vd74p" podStartSLOduration=32.451709129 podStartE2EDuration="40.455613274s" podCreationTimestamp="2025-01-15 13:24:15 +0000 UTC" firstStartedPulling="2025-01-15 13:24:46.816115511 +0000 UTC m=+56.272465191" lastFinishedPulling="2025-01-15 13:24:54.820019652 +0000 UTC m=+64.276369336" observedRunningTime="2025-01-15 13:24:55.378518458 +0000 UTC m=+64.834868150" watchObservedRunningTime="2025-01-15 13:24:55.455613274 +0000 UTC m=+64.911962966" Jan 15 13:24:56.729222 containerd[1622]: time="2025-01-15T13:24:56.729161996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:56.730488 containerd[1622]: time="2025-01-15T13:24:56.730407037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 15 13:24:56.732045 containerd[1622]: time="2025-01-15T13:24:56.731957641Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:56.736713 containerd[1622]: time="2025-01-15T13:24:56.736564729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 13:24:56.741409 containerd[1622]: time="2025-01-15T13:24:56.738442435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.917792343s" Jan 15 13:24:56.741409 containerd[1622]: 
time="2025-01-15T13:24:56.738492009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 15 13:24:56.743040 containerd[1622]: time="2025-01-15T13:24:56.742776120Z" level=info msg="CreateContainer within sandbox \"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 13:24:56.766575 containerd[1622]: time="2025-01-15T13:24:56.764313612Z" level=info msg="CreateContainer within sandbox \"088cc4e6ef69e5876fd5820717f54ce36ea131204132dddb661a31c060f3dcf6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a92c248c36c9dc1553641e0e7f066faaae8163e0feedf57b33a360cc9676cf12\"" Jan 15 13:24:56.767053 containerd[1622]: time="2025-01-15T13:24:56.766870133Z" level=info msg="StartContainer for \"a92c248c36c9dc1553641e0e7f066faaae8163e0feedf57b33a360cc9676cf12\"" Jan 15 13:24:56.882716 containerd[1622]: time="2025-01-15T13:24:56.882666895Z" level=info msg="StartContainer for \"a92c248c36c9dc1553641e0e7f066faaae8163e0feedf57b33a360cc9676cf12\" returns successfully" Jan 15 13:24:57.301257 kubelet[2936]: I0115 13:24:57.301205 2936 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 13:24:57.306235 kubelet[2936]: I0115 13:24:57.306201 2936 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 13:24:57.405394 kubelet[2936]: I0115 13:24:57.405345 2936 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8z9xn" podStartSLOduration=30.195709591 podStartE2EDuration="42.405293131s" podCreationTimestamp="2025-01-15 13:24:15 +0000 UTC" firstStartedPulling="2025-01-15 13:24:44.52928449 +0000 UTC m=+53.985634174" lastFinishedPulling="2025-01-15 13:24:56.738868035 +0000 UTC m=+66.195217714" observedRunningTime="2025-01-15 13:24:57.404512432 +0000 UTC m=+66.860862138" watchObservedRunningTime="2025-01-15 13:24:57.405293131 +0000 UTC m=+66.861642813" Jan 15 13:25:01.427798 systemd[1]: Started sshd@10-10.230.58.186:22-147.75.109.163:46084.service - OpenSSH per-connection server daemon (147.75.109.163:46084). Jan 15 13:25:02.406378 sshd[5511]: Accepted publickey for core from 147.75.109.163 port 46084 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:02.409946 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:02.432749 systemd-logind[1604]: New session 12 of user core. Jan 15 13:25:02.443423 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 13:25:03.776351 sshd[5511]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:03.784158 systemd[1]: sshd@10-10.230.58.186:22-147.75.109.163:46084.service: Deactivated successfully. Jan 15 13:25:03.789184 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Jan 15 13:25:03.790136 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 13:25:03.792080 systemd-logind[1604]: Removed session 12. Jan 15 13:25:08.927289 systemd[1]: Started sshd@11-10.230.58.186:22-147.75.109.163:57176.service - OpenSSH per-connection server daemon (147.75.109.163:57176). 
Jan 15 13:25:09.830457 sshd[5537]: Accepted publickey for core from 147.75.109.163 port 57176 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:09.832821 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:09.840160 systemd-logind[1604]: New session 13 of user core. Jan 15 13:25:09.847346 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 13:25:10.556013 sshd[5537]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:10.560696 systemd[1]: sshd@11-10.230.58.186:22-147.75.109.163:57176.service: Deactivated successfully. Jan 15 13:25:10.567294 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 13:25:10.568831 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Jan 15 13:25:10.570376 systemd-logind[1604]: Removed session 13. Jan 15 13:25:15.709318 systemd[1]: Started sshd@12-10.230.58.186:22-147.75.109.163:57190.service - OpenSSH per-connection server daemon (147.75.109.163:57190). Jan 15 13:25:16.630386 sshd[5553]: Accepted publickey for core from 147.75.109.163 port 57190 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:16.642708 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:16.654709 systemd-logind[1604]: New session 14 of user core. Jan 15 13:25:16.662155 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 13:25:17.388229 sshd[5553]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:17.399567 systemd[1]: sshd@12-10.230.58.186:22-147.75.109.163:57190.service: Deactivated successfully. Jan 15 13:25:17.405117 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. Jan 15 13:25:17.407354 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 13:25:17.409236 systemd-logind[1604]: Removed session 14. Jan 15 13:25:17.536250 systemd[1]: Started sshd@13-10.230.58.186:22-147.75.109.163:45236.service - OpenSSH per-connection server daemon (147.75.109.163:45236). Jan 15 13:25:18.430754 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 45236 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:18.432941 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:18.441153 systemd-logind[1604]: New session 15 of user core. Jan 15 13:25:18.447414 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 13:25:19.248275 sshd[5569]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:19.255573 systemd[1]: sshd@13-10.230.58.186:22-147.75.109.163:45236.service: Deactivated successfully. Jan 15 13:25:19.260612 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Jan 15 13:25:19.261779 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 13:25:19.265034 systemd-logind[1604]: Removed session 15. Jan 15 13:25:19.397554 systemd[1]: Started sshd@14-10.230.58.186:22-147.75.109.163:45244.service - OpenSSH per-connection server daemon (147.75.109.163:45244). Jan 15 13:25:20.298600 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 45244 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:20.301204 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:20.308690 systemd-logind[1604]: New session 16 of user core. Jan 15 13:25:20.317385 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 15 13:25:21.041704 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:21.057717 systemd[1]: sshd@14-10.230.58.186:22-147.75.109.163:45244.service: Deactivated successfully. Jan 15 13:25:21.063043 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Jan 15 13:25:21.063592 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 13:25:21.066402 systemd-logind[1604]: Removed session 16. Jan 15 13:25:26.195211 systemd[1]: Started sshd@15-10.230.58.186:22-147.75.109.163:45246.service - OpenSSH per-connection server daemon (147.75.109.163:45246). Jan 15 13:25:27.100778 sshd[5601]: Accepted publickey for core from 147.75.109.163 port 45246 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:27.109588 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:27.118028 systemd-logind[1604]: New session 17 of user core. Jan 15 13:25:27.124412 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 13:25:27.874808 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:27.882506 systemd[1]: sshd@15-10.230.58.186:22-147.75.109.163:45246.service: Deactivated successfully. Jan 15 13:25:27.886767 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Jan 15 13:25:27.887860 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 13:25:27.890087 systemd-logind[1604]: Removed session 17. Jan 15 13:25:33.031572 systemd[1]: Started sshd@16-10.230.58.186:22-147.75.109.163:60876.service - OpenSSH per-connection server daemon (147.75.109.163:60876). Jan 15 13:25:34.023201 sshd[5664]: Accepted publickey for core from 147.75.109.163 port 60876 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:34.028599 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:34.037333 systemd-logind[1604]: New session 18 of user core. Jan 15 13:25:34.045455 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 13:25:34.819540 sshd[5664]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:34.825433 systemd[1]: sshd@16-10.230.58.186:22-147.75.109.163:60876.service: Deactivated successfully. Jan 15 13:25:34.830276 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Jan 15 13:25:34.830668 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 13:25:34.833276 systemd-logind[1604]: Removed session 18. Jan 15 13:25:39.968449 systemd[1]: Started sshd@17-10.230.58.186:22-147.75.109.163:45160.service - OpenSSH per-connection server daemon (147.75.109.163:45160). Jan 15 13:25:40.863447 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 45160 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:40.866013 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:40.873508 systemd-logind[1604]: New session 19 of user core. Jan 15 13:25:40.881401 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 13:25:41.737673 sshd[5680]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:41.749829 systemd[1]: sshd@17-10.230.58.186:22-147.75.109.163:45160.service: Deactivated successfully. Jan 15 13:25:41.759402 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Jan 15 13:25:41.760660 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 13:25:41.766380 systemd-logind[1604]: Removed session 19. 
Jan 15 13:25:46.886211 systemd[1]: Started sshd@18-10.230.58.186:22-147.75.109.163:45176.service - OpenSSH per-connection server daemon (147.75.109.163:45176). Jan 15 13:25:47.819976 sshd[5694]: Accepted publickey for core from 147.75.109.163 port 45176 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:47.822949 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:47.834119 systemd-logind[1604]: New session 20 of user core. Jan 15 13:25:47.840661 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 13:25:48.722210 sshd[5694]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:48.730219 systemd[1]: sshd@18-10.230.58.186:22-147.75.109.163:45176.service: Deactivated successfully. Jan 15 13:25:48.735473 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit. Jan 15 13:25:48.735648 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 13:25:48.741617 systemd-logind[1604]: Removed session 20. Jan 15 13:25:48.873320 systemd[1]: Started sshd@19-10.230.58.186:22-147.75.109.163:39414.service - OpenSSH per-connection server daemon (147.75.109.163:39414). Jan 15 13:25:49.776486 sshd[5707]: Accepted publickey for core from 147.75.109.163 port 39414 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:49.778683 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:49.785344 systemd-logind[1604]: New session 21 of user core. Jan 15 13:25:49.792432 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 13:25:50.813863 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:50.824760 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit. Jan 15 13:25:50.825207 systemd[1]: sshd@19-10.230.58.186:22-147.75.109.163:39414.service: Deactivated successfully. Jan 15 13:25:50.829931 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 13:25:50.833152 systemd-logind[1604]: Removed session 21. Jan 15 13:25:50.963439 systemd[1]: Started sshd@20-10.230.58.186:22-147.75.109.163:39422.service - OpenSSH per-connection server daemon (147.75.109.163:39422). Jan 15 13:25:51.870811 sshd[5721]: Accepted publickey for core from 147.75.109.163 port 39422 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:51.873278 sshd[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:51.881395 systemd-logind[1604]: New session 22 of user core. Jan 15 13:25:51.888290 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 13:25:55.314257 systemd-journald[1187]: Under memory pressure, flushing caches. Jan 15 13:25:55.308334 systemd-resolved[1514]: Under memory pressure, flushing caches. Jan 15 13:25:55.308384 systemd-resolved[1514]: Flushed all caches. Jan 15 13:25:55.616843 sshd[5721]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:55.634200 systemd[1]: sshd@20-10.230.58.186:22-147.75.109.163:39422.service: Deactivated successfully. Jan 15 13:25:55.639981 systemd-logind[1604]: Session 22 logged out. Waiting for processes to exit. Jan 15 13:25:55.640612 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 13:25:55.644344 systemd-logind[1604]: Removed session 22. Jan 15 13:25:55.767242 systemd[1]: Started sshd@21-10.230.58.186:22-147.75.109.163:39430.service - OpenSSH per-connection server daemon (147.75.109.163:39430). 
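The "Under memory pressure, flushing caches" records from systemd-journald and systemd-resolved above show both daemons dropping their caches in response to a host memory-pressure signal; on recent systemd versions that watch is typically driven by the kernel's pressure-stall information (PSI) interface. Below is a minimal Go sketch of reading the same PSI signal, assuming a kernel with PSI enabled; it is an illustration only, not the code path systemd itself uses.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Minimal sketch: read the kernel's memory pressure-stall information (PSI).
// Assumes /proc/pressure/memory exists (Linux with PSI enabled).
func main() {
	f, err := os.Open("/proc/pressure/memory")
	if err != nil {
		fmt.Fprintln(os.Stderr, "PSI not available:", err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 {
			fmt.Printf("%-4s %s%% of the last 10s spent stalled on memory\n",
				fields[0], strings.TrimPrefix(fields[1], "avg10="))
		}
	}
}
```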
Jan 15 13:25:56.662555 sshd[5762]: Accepted publickey for core from 147.75.109.163 port 39430 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:56.665861 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:56.673325 systemd-logind[1604]: New session 23 of user core. Jan 15 13:25:56.681394 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 13:25:57.922698 sshd[5762]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:57.927111 systemd[1]: sshd@21-10.230.58.186:22-147.75.109.163:39430.service: Deactivated successfully. Jan 15 13:25:57.932323 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 13:25:57.934439 systemd-logind[1604]: Session 23 logged out. Waiting for processes to exit. Jan 15 13:25:57.937266 systemd-logind[1604]: Removed session 23. Jan 15 13:25:58.072251 systemd[1]: Started sshd@22-10.230.58.186:22-147.75.109.163:48392.service - OpenSSH per-connection server daemon (147.75.109.163:48392). Jan 15 13:25:58.963784 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 48392 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:25:58.966315 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:25:58.973213 systemd-logind[1604]: New session 24 of user core. Jan 15 13:25:58.979371 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 13:25:59.682035 sshd[5796]: pam_unix(sshd:session): session closed for user core Jan 15 13:25:59.686568 systemd[1]: sshd@22-10.230.58.186:22-147.75.109.163:48392.service: Deactivated successfully. Jan 15 13:25:59.693359 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 13:25:59.693797 systemd-logind[1604]: Session 24 logged out. Waiting for processes to exit. Jan 15 13:25:59.695674 systemd-logind[1604]: Removed session 24. Jan 15 13:26:04.838288 systemd[1]: Started sshd@23-10.230.58.186:22-147.75.109.163:48396.service - OpenSSH per-connection server daemon (147.75.109.163:48396). Jan 15 13:26:05.730828 sshd[5839]: Accepted publickey for core from 147.75.109.163 port 48396 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:26:05.733741 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:26:05.741984 systemd-logind[1604]: New session 25 of user core. Jan 15 13:26:05.748545 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 13:26:06.457635 sshd[5839]: pam_unix(sshd:session): session closed for user core Jan 15 13:26:06.462836 systemd[1]: sshd@23-10.230.58.186:22-147.75.109.163:48396.service: Deactivated successfully. Jan 15 13:26:06.469223 systemd-logind[1604]: Session 25 logged out. Waiting for processes to exit. Jan 15 13:26:06.469976 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 13:26:06.472333 systemd-logind[1604]: Removed session 25. Jan 15 13:26:11.612240 systemd[1]: Started sshd@24-10.230.58.186:22-147.75.109.163:59676.service - OpenSSH per-connection server daemon (147.75.109.163:59676). Jan 15 13:26:12.542361 sshd[5857]: Accepted publickey for core from 147.75.109.163 port 59676 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:26:12.545277 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:26:12.557274 systemd-logind[1604]: New session 26 of user core. Jan 15 13:26:12.562315 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 15 13:26:13.383196 sshd[5857]: pam_unix(sshd:session): session closed for user core Jan 15 13:26:13.390552 systemd[1]: sshd@24-10.230.58.186:22-147.75.109.163:59676.service: Deactivated successfully. Jan 15 13:26:13.394741 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 13:26:13.395191 systemd-logind[1604]: Session 26 logged out. Waiting for processes to exit. Jan 15 13:26:13.398547 systemd-logind[1604]: Removed session 26. Jan 15 13:26:18.536263 systemd[1]: Started sshd@25-10.230.58.186:22-147.75.109.163:39794.service - OpenSSH per-connection server daemon (147.75.109.163:39794). Jan 15 13:26:19.423112 sshd[5882]: Accepted publickey for core from 147.75.109.163 port 39794 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 13:26:19.425378 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 13:26:19.432198 systemd-logind[1604]: New session 27 of user core. Jan 15 13:26:19.439845 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 13:26:20.132024 sshd[5882]: pam_unix(sshd:session): session closed for user core Jan 15 13:26:20.138109 systemd[1]: sshd@25-10.230.58.186:22-147.75.109.163:39794.service: Deactivated successfully. Jan 15 13:26:20.141804 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 13:26:20.141976 systemd-logind[1604]: Session 27 logged out. Waiting for processes to exit. Jan 15 13:26:20.144738 systemd-logind[1604]: Removed session 27.
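From Session 12 onward the log settles into a steady cadence: for every incoming connection systemd starts a per-connection unit named sshd@<n>-<local-ip>:22-<remote-ip>:<port>.service, pam_unix opens a session for the core user, logind creates the matching session-<N>.scope, and a "session closed" record ends it a few seconds later. Below is a hypothetical Go helper (standard library only; the regular expressions and the assumed year are not part of any tool shown here) that pairs those open/close records and prints how long each session lasted.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Hypothetical helper: pair the sshd pam_unix "session opened"/"session closed"
// records in a journal dump like the one above and print each session's length.
var (
	opened = regexp.MustCompile(`(\w{3} \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
	closed = regexp.MustCompile(`(\w{3} \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
)

// parse turns a journal prefix such as "Jan 15 13:25:02.409946" into a time.Time.
// The prefix carries no year, so the year seen in this log (2025) is assumed.
func parse(ts string) time.Time {
	// Fractional seconds in the input are accepted even though the layout omits them;
	// the error is ignored here because the log timestamps are well formed.
	t, _ := time.Parse("Jan 2 15:04:05 2006", ts+" 2025")
	return t
}

func main() {
	starts := map[string]time.Time{} // sshd PID -> session start
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // physical lines in this dump hold many records
	for sc.Scan() {
		line := sc.Text()
		for _, m := range opened.FindAllStringSubmatch(line, -1) {
			starts[m[2]] = parse(m[1])
		}
		for _, m := range closed.FindAllStringSubmatch(line, -1) {
			if start, ok := starts[m[2]]; ok {
				fmt.Printf("sshd[%s]: session lasted %s\n", m[2], parse(m[1]).Sub(start).Round(time.Second))
				delete(starts, m[2])
			}
		}
	}
}
```

Run over this section it would show each of these sessions lasting only a few seconds, matching the open/close timestamps above.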