Dec 13 05:54:14.034730 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 05:54:14.034775 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:54:14.034789 kernel: BIOS-provided physical RAM map:
Dec 13 05:54:14.034805 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 05:54:14.034815 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 05:54:14.034825 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 05:54:14.034836 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 05:54:14.034846 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 05:54:14.034857 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 05:54:14.034867 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 05:54:14.034877 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 05:54:14.034887 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 05:54:14.034902 kernel: NX (Execute Disable) protection: active
Dec 13 05:54:14.034913 kernel: APIC: Static calls initialized
Dec 13 05:54:14.034925 kernel: SMBIOS 2.8 present.
Dec 13 05:54:14.034937 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 05:54:14.034948 kernel: Hypervisor detected: KVM
Dec 13 05:54:14.034964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 05:54:14.034975 kernel: kvm-clock: using sched offset of 4341723748 cycles
Dec 13 05:54:14.034987 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 05:54:14.034999 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 05:54:14.035010 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 05:54:14.035022 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 05:54:14.035033 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 05:54:14.035044 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 05:54:14.035055 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 05:54:14.035074 kernel: Using GB pages for direct mapping
Dec 13 05:54:14.035085 kernel: ACPI: Early table checksum verification disabled
Dec 13 05:54:14.035097 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 05:54:14.035108 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035119 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035143 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035153 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 05:54:14.035174 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035200 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035217 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035228 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:54:14.035239 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 05:54:14.035251 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 05:54:14.035262 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 05:54:14.035280 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 05:54:14.035292 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 05:54:14.035308 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 05:54:14.035320 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 05:54:14.035332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 05:54:14.035343 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 05:54:14.035355 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 05:54:14.035367 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 05:54:14.035378 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 05:54:14.035394 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 05:54:14.035406 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 05:54:14.035418 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 05:54:14.035429 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 05:54:14.035441 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 05:54:14.035452 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 05:54:14.035464 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 05:54:14.035495 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 05:54:14.035510 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 05:54:14.035522 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 05:54:14.035540 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 05:54:14.035552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 05:54:14.035564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 05:54:14.035576 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 05:54:14.035588 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 05:54:14.035600 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 05:54:14.035612 kernel: Zone ranges:
Dec 13 05:54:14.035624 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 05:54:14.035636 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 05:54:14.035652 kernel: Normal empty
Dec 13 05:54:14.035664 kernel: Movable zone start for each node
Dec 13 05:54:14.035676 kernel: Early memory node ranges
Dec 13 05:54:14.035687 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 05:54:14.035699 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 05:54:14.035711 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 05:54:14.035722 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 05:54:14.035734 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 05:54:14.035746 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 05:54:14.035757 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 05:54:14.035774 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 05:54:14.035786 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 05:54:14.035798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 05:54:14.035810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 05:54:14.035822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 05:54:14.035833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 05:54:14.035845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 05:54:14.035857 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 05:54:14.035868 kernel: TSC deadline timer available
Dec 13 05:54:14.035885 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 05:54:14.035908 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 05:54:14.035920 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 05:54:14.035931 kernel: Booting paravirtualized kernel on KVM
Dec 13 05:54:14.035942 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 05:54:14.035954 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 05:54:14.035965 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 05:54:14.035977 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 05:54:14.035988 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 05:54:14.036004 kernel: kvm-guest: PV spinlocks enabled
Dec 13 05:54:14.036015 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 05:54:14.036028 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:54:14.036040 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 05:54:14.036051 kernel: random: crng init done
Dec 13 05:54:14.036063 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 05:54:14.036074 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 05:54:14.036085 kernel: Fallback order for Node 0: 0
Dec 13 05:54:14.036101 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 05:54:14.036113 kernel: Policy zone: DMA32
Dec 13 05:54:14.036124 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 05:54:14.036141 kernel: software IO TLB: area num 16.
Dec 13 05:54:14.036174 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194824K reserved, 0K cma-reserved)
Dec 13 05:54:14.036188 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 05:54:14.036203 kernel: Kernel/User page tables isolation: enabled
Dec 13 05:54:14.036215 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 05:54:14.036226 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 05:54:14.036244 kernel: Dynamic Preempt: voluntary
Dec 13 05:54:14.036256 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 05:54:14.036268 kernel: rcu: RCU event tracing is enabled.
Dec 13 05:54:14.036280 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 05:54:14.036293 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 05:54:14.036316 kernel: Rude variant of Tasks RCU enabled.
Dec 13 05:54:14.036333 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 05:54:14.036346 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 05:54:14.036359 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 05:54:14.036371 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 05:54:14.036383 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 05:54:14.036396 kernel: Console: colour VGA+ 80x25
Dec 13 05:54:14.036412 kernel: printk: console [tty0] enabled
Dec 13 05:54:14.036425 kernel: printk: console [ttyS0] enabled
Dec 13 05:54:14.036437 kernel: ACPI: Core revision 20230628
Dec 13 05:54:14.036450 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 05:54:14.036462 kernel: x2apic enabled
Dec 13 05:54:14.036547 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 05:54:14.036560 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 05:54:14.036573 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 05:54:14.036586 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 05:54:14.036598 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 05:54:14.036610 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 05:54:14.036623 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 05:54:14.036635 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 05:54:14.036647 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 05:54:14.036665 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 05:54:14.036678 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 05:54:14.036690 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 05:54:14.036702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 05:54:14.036715 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 05:54:14.036730 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 05:54:14.036743 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 05:54:14.036755 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 05:54:14.036767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 05:54:14.036780 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 05:54:14.036793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 05:54:14.036810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 05:54:14.036823 kernel: Freeing SMP alternatives memory: 32K
Dec 13 05:54:14.036835 kernel: pid_max: default: 32768 minimum: 301
Dec 13 05:54:14.036856 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 05:54:14.036868 kernel: landlock: Up and running.
Dec 13 05:54:14.036880 kernel: SELinux: Initializing.
Dec 13 05:54:14.036893 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:54:14.036905 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:54:14.036918 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 05:54:14.036930 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:54:14.036943 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:54:14.036961 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:54:14.036974 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 05:54:14.036986 kernel: signal: max sigframe size: 1776
Dec 13 05:54:14.036998 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 05:54:14.037011 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 05:54:14.037027 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 05:54:14.037040 kernel: smp: Bringing up secondary CPUs ...
Dec 13 05:54:14.037052 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 05:54:14.037064 kernel: .... node #0, CPUs: #1
Dec 13 05:54:14.037081 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 05:54:14.037094 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 05:54:14.037107 kernel: smpboot: Max logical packages: 16
Dec 13 05:54:14.037119 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 05:54:14.037132 kernel: devtmpfs: initialized
Dec 13 05:54:14.037144 kernel: x86/mm: Memory block size: 128MB
Dec 13 05:54:14.037159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 05:54:14.037182 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 05:54:14.037196 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 05:54:14.037213 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 05:54:14.037226 kernel: audit: initializing netlink subsys (disabled)
Dec 13 05:54:14.037239 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 05:54:14.037251 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 05:54:14.037264 kernel: audit: type=2000 audit(1734069252.365:1): state=initialized audit_enabled=0 res=1
Dec 13 05:54:14.037276 kernel: cpuidle: using governor menu
Dec 13 05:54:14.037289 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 05:54:14.037301 kernel: dca service started, version 1.12.1
Dec 13 05:54:14.037313 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 05:54:14.037331 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 05:54:14.037344 kernel: PCI: Using configuration type 1 for base access
Dec 13 05:54:14.037356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 05:54:14.037369 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 05:54:14.037382 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 05:54:14.037394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 05:54:14.037406 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 05:54:14.037419 kernel: ACPI: Added _OSI(Module Device)
Dec 13 05:54:14.037431 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 05:54:14.037449 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 05:54:14.037461 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 05:54:14.037487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 05:54:14.037500 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 05:54:14.037512 kernel: ACPI: Interpreter enabled
Dec 13 05:54:14.037525 kernel: ACPI: PM: (supports S0 S5)
Dec 13 05:54:14.037537 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 05:54:14.037550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 05:54:14.037562 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 05:54:14.037580 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 05:54:14.037593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 05:54:14.037826 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 05:54:14.038010 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 05:54:14.038186 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 05:54:14.038206 kernel: PCI host bridge to bus 0000:00
Dec 13 05:54:14.038379 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 05:54:14.040584 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 05:54:14.040751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 05:54:14.040906 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 05:54:14.041058 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 05:54:14.041230 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:54:14.041382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 05:54:14.041615 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 05:54:14.041816 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 05:54:14.041984 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 05:54:14.042151 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 05:54:14.042329 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 05:54:14.042522 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 05:54:14.042713 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.042894 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 05:54:14.043107 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.043298 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 05:54:14.044544 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.044738 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 05:54:14.044917 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.045097 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 05:54:14.045296 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.045462 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 05:54:14.045667 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.045831 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 05:54:14.046004 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.046189 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 05:54:14.046363 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 05:54:14.046543 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 05:54:14.046733 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 05:54:14.046901 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 05:54:14.047066 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 05:54:14.047244 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 05:54:14.047418 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 05:54:14.048636 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 05:54:14.048820 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 05:54:14.048990 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 05:54:14.049159 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 05:54:14.049353 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 05:54:14.049551 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 05:54:14.049755 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 05:54:14.049931 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 05:54:14.050099 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 05:54:14.050289 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 05:54:14.050463 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 05:54:14.051727 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 05:54:14.051923 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 05:54:14.052096 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:54:14.052277 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:54:14.052443 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:54:14.053628 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 05:54:14.053825 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 05:54:14.054020 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 05:54:14.054205 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:54:14.054376 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:54:14.054592 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 05:54:14.054763 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 05:54:14.054932 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:54:14.055096 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:54:14.055281 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:54:14.055462 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 05:54:14.055722 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 05:54:14.055891 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:54:14.056057 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:54:14.056235 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:54:14.056404 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:54:14.056597 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:54:14.056771 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:54:14.056959 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:54:14.057126 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:54:14.057304 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:54:14.057490 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:54:14.057670 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:54:14.057838 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:54:14.058013 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:54:14.058201 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:54:14.058368 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:54:14.058591 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:54:14.058754 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:54:14.058917 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:54:14.058937 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 05:54:14.058951 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 05:54:14.058992 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 05:54:14.059005 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 05:54:14.059018 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 05:54:14.059031 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 05:54:14.059044 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 05:54:14.059056 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 05:54:14.059069 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 05:54:14.059081 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 05:54:14.059094 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 05:54:14.059112 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 05:54:14.059125 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 05:54:14.059138 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 05:54:14.059150 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 05:54:14.059163 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 05:54:14.059187 kernel: iommu: Default domain type: Translated
Dec 13 05:54:14.059200 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 05:54:14.059213 kernel: PCI: Using ACPI for IRQ routing
Dec 13 05:54:14.059225 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 05:54:14.059244 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 05:54:14.059406 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 05:54:14.059406 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 05:54:14.059644 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 05:54:14.059820 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 05:54:14.059857 kernel: vgaarb: loaded
Dec 13 05:54:14.059875 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 05:54:14.059887 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 05:54:14.059900 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 05:54:14.059913 kernel: pnp: PnP ACPI init
Dec 13 05:54:14.060115 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 05:54:14.060137 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 05:54:14.060150 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 05:54:14.060163 kernel: NET: Registered PF_INET protocol family
Dec 13 05:54:14.060187 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 05:54:14.060200 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 05:54:14.060213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 05:54:14.060225 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 05:54:14.060246 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 05:54:14.060259 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 05:54:14.060272 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:54:14.060284 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:54:14.060297 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 05:54:14.060310 kernel: NET: Registered PF_XDP protocol family
Dec 13 05:54:14.060516 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 05:54:14.060688 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 05:54:14.060860 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 05:54:14.061026 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 05:54:14.061202 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 05:54:14.061368 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 05:54:14.061555 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 05:54:14.061719 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 05:54:14.061891 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 05:54:14.062054 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 05:54:14.062240 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 05:54:14.062406 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 05:54:14.062686 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 05:54:14.062856 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 05:54:14.063021 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 05:54:14.063210 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 05:54:14.063407 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:54:14.063607 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:54:14.063772 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:54:14.063934 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 05:54:14.064099 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:54:14.064277 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:54:14.064445 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:54:14.064630 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 05:54:14.064806 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:54:14.064972 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:54:14.065137 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:54:14.065324 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 05:54:14.065547 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:54:14.065784 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:54:14.065981 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:54:14.066152 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 05:54:14.066338 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:54:14.066579 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:54:14.066744 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:54:14.066907 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 05:54:14.067080 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:54:14.067257 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:54:14.067421 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:54:14.067638 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 05:54:14.067853 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:54:14.068018 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:54:14.068198 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:54:14.068363 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 05:54:14.068562 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:54:14.068726 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:54:14.068889 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:54:14.069055 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 05:54:14.069233 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:54:14.069397 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:54:14.069575 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 05:54:14.069725 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 05:54:14.069886 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 05:54:14.070044 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 05:54:14.070231 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 05:54:14.070383 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:54:14.070591 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 05:54:14.070751 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 05:54:14.070910 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:54:14.071088 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 05:54:14.071290 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 05:54:14.071447 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 05:54:14.071626 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:54:14.071795 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 05:54:14.071964 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 05:54:14.072132 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:54:14.072325 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 05:54:14.072561 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 05:54:14.072720 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:54:14.072906 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 05:54:14.073074 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 05:54:14.073250 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:54:14.073414 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 05:54:14.073624 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 05:54:14.073792 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:54:14.073960 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 05:54:14.074111 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 05:54:14.074289 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:54:14.074452 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 05:54:14.074653 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 05:54:14.074805 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:54:14.074824 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 05:54:14.074838 kernel: PCI: CLS 0 bytes, default 64
Dec 13 05:54:14.074851 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 05:54:14.074864 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Dec 13 05:54:14.074876 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 05:54:14.074889 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 05:54:14.074902 kernel: Initialise system trusted keyrings
Dec 13 05:54:14.074942 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 05:54:14.074955 kernel: Key type asymmetric registered
Dec 13 05:54:14.074968 kernel: Asymmetric key parser 'x509' registered
Dec 13 05:54:14.074981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 05:54:14.075001 kernel: io scheduler mq-deadline registered
Dec 13 05:54:14.075026 kernel: io scheduler kyber registered
Dec 13 05:54:14.075039 kernel: io scheduler bfq registered
Dec 13 05:54:14.075213 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 05:54:14.075380 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 05:54:14.075570 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.075737 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 05:54:14.075941 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 05:54:14.076111 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.076293 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 05:54:14.076461 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 05:54:14.076682 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.076856 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 05:54:14.077022 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 05:54:14.077219 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.077384 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 05:54:14.077562 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 05:54:14.077735 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.077900 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 05:54:14.078065 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 05:54:14.078258 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.078426 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 05:54:14.078636 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 05:54:14.078810 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.078980 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 05:54:14.079149 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 05:54:14.079326 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 05:54:14.079348 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 05:54:14.079362 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 05:54:14.079382 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 05:54:14.079397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 05:54:14.079410 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 05:54:14.079424 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 05:54:14.079437 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 05:54:14.079451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 05:54:14.079627 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 05:54:14.079649 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 05:54:14.079829 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 05:54:14.079984 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T05:54:13 UTC (1734069253)
Dec 13 05:54:14.080137 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 05:54:14.080174 kernel: intel_pstate: CPU model not supported
Dec 13 05:54:14.080197 kernel: NET: Registered PF_INET6 protocol family
Dec 13 05:54:14.080211 kernel: Segment Routing with IPv6
Dec 13 05:54:14.080224 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 05:54:14.080238 kernel: NET: Registered PF_PACKET protocol family
Dec 13 05:54:14.080251 kernel: Key type dns_resolver registered
Dec 13 05:54:14.080269 kernel: IPI shorthand broadcast: enabled
Dec 13 05:54:14.080283 kernel: sched_clock: Marking stable (1176003952, 243961111)->(1650213325, -230248262)
Dec 13 05:54:14.080296 kernel: registered taskstats version 1
Dec 13 05:54:14.080310 kernel: Loading compiled-in X.509 certificates
Dec 13 05:54:14.080323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 05:54:14.080336 kernel: Key type .fscrypt registered
Dec 13 05:54:14.080349 kernel: Key type fscrypt-provisioning registered
Dec 13 05:54:14.080363 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 05:54:14.080376 kernel: ima: Allocated hash algorithm: sha1
Dec 13 05:54:14.080394 kernel: ima: No architecture policies found
Dec 13 05:54:14.080407 kernel: clk: Disabling unused clocks
Dec 13 05:54:14.080421 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 05:54:14.080434 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 05:54:14.080447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 05:54:14.080461 kernel: Run /init as init process
Dec 13 05:54:14.080519 kernel: with arguments:
Dec 13 05:54:14.080533 kernel: /init
Dec 13 05:54:14.080546 kernel: with environment:
Dec 13 05:54:14.080566 kernel: HOME=/
Dec 13 05:54:14.080579 kernel: TERM=linux
Dec 13 05:54:14.080592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 05:54:14.080608 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 05:54:14.080624 systemd[1]: Detected virtualization kvm.
Dec 13 05:54:14.080639 systemd[1]: Detected architecture x86-64.
Dec 13 05:54:14.080652 systemd[1]: Running in initrd.
Dec 13 05:54:14.080666 systemd[1]: No hostname configured, using default hostname.
Dec 13 05:54:14.080685 systemd[1]: Hostname set to <localhost>.
Dec 13 05:54:14.080700 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 05:54:14.080714 systemd[1]: Queued start job for default target initrd.target.
Dec 13 05:54:14.080728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:54:14.080743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:54:14.080757 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 05:54:14.080772 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 05:54:14.080786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 05:54:14.080806 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 05:54:14.080830 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 05:54:14.080845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 05:54:14.080859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:54:14.080873 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:54:14.080887 systemd[1]: Reached target paths.target - Path Units.
Dec 13 05:54:14.080908 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 05:54:14.080922 systemd[1]: Reached target swap.target - Swaps.
Dec 13 05:54:14.080936 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 05:54:14.080951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 05:54:14.080965 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 05:54:14.080979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 05:54:14.080994 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 05:54:14.081008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:54:14.081022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:54:14.081042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:54:14.081061 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 05:54:14.081075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 05:54:14.081090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 05:54:14.081104 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 05:54:14.081124 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 05:54:14.081138 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 05:54:14.081152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 05:54:14.081184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:54:14.081253 systemd-journald[201]: Collecting audit messages is disabled.
Dec 13 05:54:14.081287 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 05:54:14.081302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:54:14.081323 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 05:54:14.081343 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 05:54:14.081359 systemd-journald[201]: Journal started
Dec 13 05:54:14.081390 systemd-journald[201]: Runtime Journal (/run/log/journal/2c9630c6cbbf443fb84b911a636b1705) is 4.7M, max 38.0M, 33.2M free.
Dec 13 05:54:14.060009 systemd-modules-load[202]: Inserted module 'overlay'
Dec 13 05:54:14.155509 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 05:54:14.155544 kernel: Bridge firewalling registered
Dec 13 05:54:14.113051 systemd-modules-load[202]: Inserted module 'br_netfilter'
Dec 13 05:54:14.164485 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 05:54:14.164872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:54:14.165912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:54:14.175655 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:54:14.181670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 05:54:14.186664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 05:54:14.191527 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 05:54:14.200851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 05:54:14.204263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:54:14.215316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:54:14.217485 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:54:14.227263 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 05:54:14.231665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 05:54:14.234201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:54:14.242481 dracut-cmdline[233]: dracut-dracut-053
Dec 13 05:54:14.244130 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:54:14.283998 systemd-resolved[234]: Positive Trust Anchors:
Dec 13 05:54:14.284017 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 05:54:14.284061 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 05:54:14.293152 systemd-resolved[234]: Defaulting to hostname 'linux'.
Dec 13 05:54:14.295938 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 05:54:14.297138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:54:14.348538 kernel: SCSI subsystem initialized
Dec 13 05:54:14.360502 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 05:54:14.373500 kernel: iscsi: registered transport (tcp)
Dec 13 05:54:14.400974 kernel: iscsi: registered transport (qla4xxx)
Dec 13 05:54:14.401049 kernel: QLogic iSCSI HBA Driver
Dec 13 05:54:14.456305 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 05:54:14.465768 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 05:54:14.510027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 05:54:14.510094 kernel: device-mapper: uevent: version 1.0.3
Dec 13 05:54:14.510916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 05:54:14.560531 kernel: raid6: sse2x4 gen() 12770 MB/s
Dec 13 05:54:14.578634 kernel: raid6: sse2x2 gen() 9046 MB/s
Dec 13 05:54:14.597224 kernel: raid6: sse2x1 gen() 9287 MB/s
Dec 13 05:54:14.597263 kernel: raid6: using algorithm sse2x4 gen() 12770 MB/s
Dec 13 05:54:14.616196 kernel: raid6: .... xor() 7437 MB/s, rmw enabled
Dec 13 05:54:14.616248 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 05:54:14.642544 kernel: xor: automatically using best checksumming function avx
Dec 13 05:54:14.836516 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 05:54:14.851572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 05:54:14.858697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:54:14.886994 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Dec 13 05:54:14.894222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:54:14.903692 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 05:54:14.926102 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Dec 13 05:54:14.965579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 05:54:14.972715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 05:54:15.081486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:54:15.088648 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 05:54:15.126352 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 05:54:15.129899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 05:54:15.131935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:54:15.132762 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 05:54:15.139898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 05:54:15.167777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 05:54:15.208521 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 13 05:54:15.296509 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 05:54:15.296547 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 05:54:15.296749 kernel: ACPI: bus type USB registered
Dec 13 05:54:15.296780 kernel: usbcore: registered new interface driver usbfs
Dec 13 05:54:15.296799 kernel: usbcore: registered new interface driver hub
Dec 13 05:54:15.296829 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 05:54:15.296847 kernel: GPT:17805311 != 125829119
Dec 13 05:54:15.296864 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 05:54:15.296882 kernel: GPT:17805311 != 125829119
Dec 13 05:54:15.296899 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 05:54:15.296916 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:54:15.296934 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 05:54:15.296956 kernel: AES CTR mode by8 optimization enabled
Dec 13 05:54:15.296975 kernel: usbcore: registered new device driver usb
Dec 13 05:54:15.238831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 05:54:15.409185 kernel: libata version 3.00 loaded.
Dec 13 05:54:15.409225 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 05:54:15.409493 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 05:54:15.409518 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 05:54:15.409730 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 05:54:15.409930 kernel: scsi host0: ahci
Dec 13 05:54:15.410199 kernel: scsi host1: ahci
Dec 13 05:54:15.410408 kernel: scsi host2: ahci
Dec 13 05:54:15.410669 kernel: scsi host3: ahci
Dec 13 05:54:15.410869 kernel: scsi host4: ahci
Dec 13 05:54:15.411073 kernel: scsi host5: ahci
Dec 13 05:54:15.411289 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Dec 13 05:54:15.411311 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Dec 13 05:54:15.411329 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Dec 13 05:54:15.411347 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Dec 13 05:54:15.411365 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Dec 13 05:54:15.411383 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Dec 13 05:54:15.411401 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467)
Dec 13 05:54:15.239016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:54:15.240028 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:54:15.240823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 05:54:15.240994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:54:15.241780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:54:15.252808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:54:15.428368 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (470)
Dec 13 05:54:15.403956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 05:54:15.410259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:54:15.417506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 05:54:15.434632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:54:15.445713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 05:54:15.452329 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 05:54:15.453199 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 05:54:15.461907 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 05:54:15.463364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:54:15.480567 disk-uuid[571]: Primary Header is updated.
Dec 13 05:54:15.480567 disk-uuid[571]: Secondary Entries is updated.
Dec 13 05:54:15.480567 disk-uuid[571]: Secondary Header is updated.
Dec 13 05:54:15.486482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:54:15.493497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:54:15.692497 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.692570 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.694745 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.697418 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.701038 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.701190 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 05:54:15.712577 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:54:15.730005 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 05:54:15.730250 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 05:54:15.730455 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:54:15.730950 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 05:54:15.731202 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 05:54:15.731407 kernel: hub 1-0:1.0: USB hub found Dec 13 05:54:15.731659 kernel: hub 1-0:1.0: 4 ports detected Dec 13 05:54:15.731857 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 05:54:15.732152 kernel: hub 2-0:1.0: USB hub found Dec 13 05:54:15.732372 kernel: hub 2-0:1.0: 4 ports detected Dec 13 05:54:15.968568 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 05:54:16.110513 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 05:54:16.116861 kernel: usbcore: registered new interface driver usbhid Dec 13 05:54:16.116905 kernel: usbhid: USB HID core driver Dec 13 05:54:16.124217 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 05:54:16.124262 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 05:54:16.496647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:54:16.497635 disk-uuid[572]: The operation has completed successfully. Dec 13 05:54:16.551417 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 05:54:16.551609 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 05:54:16.571679 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 05:54:16.576148 sh[583]: Success Dec 13 05:54:16.592505 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 05:54:16.652711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 05:54:16.663611 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 05:54:16.666922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
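Note: the "sha256-avx" line above is dm-verity selecting an AVX-accelerated SHA-256 for /dev/mapper/usr. Conceptually, every read of a data block is checked against a precomputed hash tree rooted in the verity.usrhash= value from the kernel command line. A simplified Python illustration of that per-block check (the real on-disk format is a salted Merkle tree; this shows only the idea):

    import hashlib

    def verify_block(block: bytes, expected_digest: bytes, salt: bytes = b"") -> bytes:
        # Hash the block and compare against the stored tree entry; on a
        # mismatch the read fails instead of returning corrupted data.
        digest = hashlib.sha256(salt + block).digest()
        if digest != expected_digest:
            raise IOError("verity: block digest mismatch, rejecting read")
        return block

    block = b"\x00" * 4096
    expected = hashlib.sha256(block).digest()
    verify_block(block, expected)            # passes
    # verify_block(b"x" * 4096, expected)    # would raise IOError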
Dec 13 05:54:16.690732 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 05:54:16.690781 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:54:16.693023 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 05:54:16.696446 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 05:54:16.696500 kernel: BTRFS info (device dm-0): using free space tree Dec 13 05:54:16.705787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 05:54:16.707216 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 05:54:16.714772 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 05:54:16.718378 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 05:54:16.732548 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:54:16.732596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:54:16.735431 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:54:16.742503 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:54:16.756991 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 05:54:16.760655 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:54:16.769406 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 05:54:16.775660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 05:54:16.883316 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:54:16.896709 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:54:16.929668 ignition[674]: Ignition 2.19.0 Dec 13 05:54:16.929689 ignition[674]: Stage: fetch-offline Dec 13 05:54:16.929799 ignition[674]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:16.929825 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:16.929997 ignition[674]: parsed url from cmdline: "" Dec 13 05:54:16.933938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:54:16.930004 ignition[674]: no config URL provided Dec 13 05:54:16.930013 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:54:16.937807 systemd-networkd[765]: lo: Link UP Dec 13 05:54:16.930030 ignition[674]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:54:16.937813 systemd-networkd[765]: lo: Gained carrier Dec 13 05:54:16.930038 ignition[674]: failed to fetch config: resource requires networking Dec 13 05:54:16.940175 systemd-networkd[765]: Enumeration completed Dec 13 05:54:16.930540 ignition[674]: Ignition finished successfully Dec 13 05:54:16.940353 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:54:16.940769 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:54:16.940775 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
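Note: the fetch-offline messages above walk a fixed list of local config sources before concluding that fetching requires networking and deferring to the online "fetch" stage. A rough Python mirror of that lookup (paths taken from the log; the helper is illustrative, not Ignition's actual logic):

    import os

    def offline_config_sources(platform: str = "openstack") -> list[str]:
        checks = [
            ("/usr/lib/ignition/base.d", os.path.isdir),
            (f"/usr/lib/ignition/base.platform.d/{platform}", os.path.isdir),
            ("/usr/lib/ignition/user.ign", os.path.isfile),
        ]
        found = [path for path, present in checks if present(path)]
        if not found:
            raise RuntimeError("failed to fetch config: resource requires networking")
        return found

    try:
        print(offline_config_sources())
    except RuntimeError as err:
        print(err)  # the stage then ends and "fetch" retries over the network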
Dec 13 05:54:16.942033 systemd-networkd[765]: eth0: Link UP Dec 13 05:54:16.942038 systemd-networkd[765]: eth0: Gained carrier Dec 13 05:54:16.942051 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:54:16.944190 systemd[1]: Reached target network.target - Network. Dec 13 05:54:16.952684 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 05:54:16.969828 systemd-networkd[765]: eth0: DHCPv4 address 10.230.15.170/30, gateway 10.230.15.169 acquired from 10.230.15.169 Dec 13 05:54:16.974279 ignition[772]: Ignition 2.19.0 Dec 13 05:54:16.974296 ignition[772]: Stage: fetch Dec 13 05:54:16.976516 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:16.976557 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:16.976727 ignition[772]: parsed url from cmdline: "" Dec 13 05:54:16.976733 ignition[772]: no config URL provided Dec 13 05:54:16.976743 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:54:16.976758 ignition[772]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:54:16.977038 ignition[772]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 05:54:16.978413 ignition[772]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 05:54:16.978429 ignition[772]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 05:54:16.993599 ignition[772]: GET result: OK Dec 13 05:54:16.994366 ignition[772]: parsing config with SHA512: 26ac9c780f86523d420140801c247e59b223e15c9f418c0955644982521da7d63c89942f8e640c7adb1e1e6f2c8acdbf962b722e035569cab4d7df256ac995a0 Dec 13 05:54:16.999824 unknown[772]: fetched base config from "system" Dec 13 05:54:16.999840 unknown[772]: fetched base config from "system" Dec 13 05:54:17.000656 ignition[772]: fetch: fetch complete Dec 13 05:54:16.999864 unknown[772]: fetched user config from "openstack" Dec 13 05:54:17.000664 ignition[772]: fetch: fetch passed Dec 13 05:54:17.002442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 05:54:17.000729 ignition[772]: Ignition finished successfully Dec 13 05:54:17.013831 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 05:54:17.035955 ignition[779]: Ignition 2.19.0 Dec 13 05:54:17.035968 ignition[779]: Stage: kargs Dec 13 05:54:17.036294 ignition[779]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:17.038839 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 05:54:17.036315 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:17.037450 ignition[779]: kargs: kargs passed Dec 13 05:54:17.037553 ignition[779]: Ignition finished successfully Dec 13 05:54:17.044679 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 05:54:17.066222 ignition[786]: Ignition 2.19.0 Dec 13 05:54:17.066243 ignition[786]: Stage: disks Dec 13 05:54:17.066463 ignition[786]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:17.066497 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:17.069019 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 05:54:17.067568 ignition[786]: disks: disks passed Dec 13 05:54:17.070896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
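Note: with DHCP up, the fetch stage retrieves user data from the OpenStack metadata service and logs a SHA512 of the retrieved config before parsing it. A minimal sketch of those two steps (endpoint from the log; only reachable from inside the instance):

    import hashlib
    import urllib.request

    METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_userdata() -> bytes:
        # "GET http://169.254.169.254/openstack/latest/user_data: attempt #1"
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            return resp.read()

    # data = fetch_userdata()
    # print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())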
Dec 13 05:54:17.067643 ignition[786]: Ignition finished successfully Dec 13 05:54:17.071810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 05:54:17.073330 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:54:17.074631 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:54:17.076137 systemd[1]: Reached target basic.target - Basic System. Dec 13 05:54:17.083656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 05:54:17.102503 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 05:54:17.105819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 05:54:17.110780 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 05:54:17.230505 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 05:54:17.231111 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 05:54:17.232456 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 05:54:17.246619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:54:17.249676 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 05:54:17.251293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 05:54:17.253523 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 05:54:17.255616 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 05:54:17.269721 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Dec 13 05:54:17.269777 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:54:17.269798 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:54:17.269816 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:54:17.255655 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:54:17.273500 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:54:17.278871 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 05:54:17.281178 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 05:54:17.287663 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 05:54:17.369547 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 05:54:17.377346 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory Dec 13 05:54:17.384663 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 05:54:17.392469 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 05:54:17.496216 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 05:54:17.502645 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 05:54:17.511730 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 05:54:17.523479 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:54:17.543236 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 05:54:17.557663 ignition[921]: INFO : Ignition 2.19.0 Dec 13 05:54:17.557663 ignition[921]: INFO : Stage: mount Dec 13 05:54:17.560198 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:17.560198 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:17.560198 ignition[921]: INFO : mount: mount passed Dec 13 05:54:17.560198 ignition[921]: INFO : Ignition finished successfully Dec 13 05:54:17.561407 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 05:54:17.688157 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 05:54:18.268761 systemd-networkd[765]: eth0: Gained IPv6LL Dec 13 05:54:19.389766 systemd-networkd[765]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83ea:24:19ff:fee6:faa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83ea:24:19ff:fee6:faa/64 assigned by NDisc. Dec 13 05:54:19.389783 systemd-networkd[765]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:54:24.441985 coreos-metadata[804]: Dec 13 05:54:24.441 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:54:24.464044 coreos-metadata[804]: Dec 13 05:54:24.463 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:54:24.479142 coreos-metadata[804]: Dec 13 05:54:24.479 INFO Fetch successful Dec 13 05:54:24.480569 coreos-metadata[804]: Dec 13 05:54:24.479 INFO wrote hostname srv-kh3sk.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 05:54:24.482072 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 05:54:24.482349 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 05:54:24.492626 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 05:54:24.525698 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:54:24.539493 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936) Dec 13 05:54:24.539573 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:54:24.542871 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:54:24.542930 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:54:24.548504 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:54:24.551632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
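Note: the coreos-metadata lines above show the hostname agent falling back from the (absent) config drive to the metadata API, then writing the result into the target root. The equivalent steps in Python (endpoint and paths from the log; error handling omitted):

    import urllib.request

    def write_hostname(sysroot: str = "/sysroot") -> str:
        # "Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1"
        url = "http://169.254.169.254/latest/meta-data/hostname"
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        # "wrote hostname srv-kh3sk.gb1.brightbox.com to /sysroot/etc/hostname"
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname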
Dec 13 05:54:24.580747 ignition[954]: INFO : Ignition 2.19.0 Dec 13 05:54:24.580747 ignition[954]: INFO : Stage: files Dec 13 05:54:24.582499 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:24.582499 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:24.582499 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Dec 13 05:54:24.586276 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 05:54:24.586276 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 05:54:24.589887 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 05:54:24.591155 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 05:54:24.592857 unknown[954]: wrote ssh authorized keys file for user: core Dec 13 05:54:24.594030 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 05:54:24.596004 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:54:24.597373 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 05:54:24.794295 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 05:54:25.478675 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:54:25.480446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 05:54:25.499402 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 05:54:25.499402 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 05:54:25.499402 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 05:54:26.005091 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 05:54:27.327924 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 05:54:27.329949 ignition[954]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 05:54:27.338452 ignition[954]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:54:27.338452 ignition[954]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:54:27.338452 ignition[954]: INFO : files: files passed Dec 13 05:54:27.338452 ignition[954]: INFO : Ignition finished successfully Dec 13 05:54:27.332222 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 05:54:27.343738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 05:54:27.348593 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 05:54:27.354666 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 05:54:27.355641 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 05:54:27.366375 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:54:27.366375 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:54:27.368885 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:54:27.371036 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:54:27.372395 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 05:54:27.380688 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 05:54:27.418265 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 05:54:27.418447 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 05:54:27.420284 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 13 05:54:27.421630 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 05:54:27.423263 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 05:54:27.430650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 05:54:27.447620 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:54:27.454704 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 05:54:27.475877 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:54:27.477887 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:54:27.478782 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 05:54:27.480383 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 05:54:27.480574 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:54:27.482407 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 05:54:27.483328 systemd[1]: Stopped target basic.target - Basic System. Dec 13 05:54:27.484926 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 05:54:27.486396 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:54:27.487896 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 05:54:27.489448 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 05:54:27.491126 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:54:27.492743 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 05:54:27.494263 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 05:54:27.495944 systemd[1]: Stopped target swap.target - Swaps. Dec 13 05:54:27.497396 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 05:54:27.497606 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:54:27.499358 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:54:27.500420 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:54:27.501915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 05:54:27.502056 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:54:27.503459 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 05:54:27.503632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 05:54:27.505787 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 05:54:27.505981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:54:27.507774 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 05:54:27.507951 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 05:54:27.515839 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 05:54:27.516680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 05:54:27.516945 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:54:27.526015 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 05:54:27.527480 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 05:54:27.527650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:54:27.529438 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 05:54:27.529615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 05:54:27.539821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 05:54:27.540068 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 05:54:27.558171 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 05:54:27.560688 ignition[1006]: INFO : Ignition 2.19.0 Dec 13 05:54:27.560688 ignition[1006]: INFO : Stage: umount Dec 13 05:54:27.562338 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:54:27.562338 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:54:27.562338 ignition[1006]: INFO : umount: umount passed Dec 13 05:54:27.562338 ignition[1006]: INFO : Ignition finished successfully Dec 13 05:54:27.563324 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 05:54:27.563476 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 05:54:27.565637 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 05:54:27.565765 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 05:54:27.567405 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 05:54:27.567524 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 05:54:27.568820 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 05:54:27.568888 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 05:54:27.570171 systemd[1]: Stopped target network.target - Network. Dec 13 05:54:27.571510 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 05:54:27.571576 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:54:27.573048 systemd[1]: Stopped target paths.target - Path Units. Dec 13 05:54:27.574290 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 05:54:27.574365 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:54:27.575823 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 05:54:27.577186 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 05:54:27.578584 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 05:54:27.578646 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 05:54:27.579337 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 05:54:27.579397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:54:27.580708 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 05:54:27.580773 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 05:54:27.582721 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 05:54:27.582818 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 05:54:27.584529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 05:54:27.586329 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 05:54:27.588629 systemd-networkd[765]: eth0: DHCPv6 lease lost Dec 13 05:54:27.591202 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 13 05:54:27.591374 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 05:54:27.593314 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 05:54:27.593446 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:54:27.603628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 05:54:27.604938 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 05:54:27.605015 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:54:27.607210 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:54:27.613845 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 05:54:27.614024 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 05:54:27.624887 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 05:54:27.625986 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:54:27.630540 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 05:54:27.630640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 05:54:27.632281 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 05:54:27.632344 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:54:27.634293 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 05:54:27.634365 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:54:27.636567 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 05:54:27.636643 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 05:54:27.638725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:54:27.638809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:54:27.647765 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 05:54:27.648581 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 05:54:27.648667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:54:27.652233 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 05:54:27.652336 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 05:54:27.654819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 05:54:27.654892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:54:27.658230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 05:54:27.658314 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:54:27.659120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:54:27.659187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:54:27.660506 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 05:54:27.660673 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 05:54:27.664772 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 05:54:27.664913 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 05:54:27.718262 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 05:54:27.718448 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 05:54:27.720955 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 05:54:27.721745 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 05:54:27.721862 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 05:54:27.731767 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 05:54:27.741959 systemd[1]: Switching root. Dec 13 05:54:27.774344 systemd-journald[201]: Journal stopped Dec 13 05:54:29.148209 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 05:54:29.148413 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 05:54:29.148452 kernel: SELinux: policy capability open_perms=1 Dec 13 05:54:29.149547 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 05:54:29.149603 kernel: SELinux: policy capability always_check_network=0 Dec 13 05:54:29.149645 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 05:54:29.149666 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 05:54:29.149691 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 05:54:29.149716 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 05:54:29.149773 kernel: audit: type=1403 audit(1734069267.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 05:54:29.149807 systemd[1]: Successfully loaded SELinux policy in 48.812ms. Dec 13 05:54:29.149866 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.298ms. Dec 13 05:54:29.149907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:54:29.149930 systemd[1]: Detected virtualization kvm. Dec 13 05:54:29.149957 systemd[1]: Detected architecture x86-64. Dec 13 05:54:29.149985 systemd[1]: Detected first boot. Dec 13 05:54:29.150012 systemd[1]: Hostname set to <srv-kh3sk.gb1.brightbox.com>. Dec 13 05:54:29.150046 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:54:29.150076 zram_generator::config[1050]: No configuration found. Dec 13 05:54:29.150107 systemd[1]: Populated /etc with preset unit settings. Dec 13 05:54:29.150147 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 05:54:29.150168 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 05:54:29.150194 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 05:54:29.150216 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 05:54:29.150256 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 05:54:29.150276 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 05:54:29.150309 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 05:54:29.150335 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 05:54:29.150381 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 05:54:29.150426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 05:54:29.150453 systemd[1]: Created slice user.slice - User and Session Slice.
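Note: "Initializing machine ID from VM UUID" above means the first-boot machine ID is seeded from the hypervisor-provided DMI product UUID rather than generated randomly. A rough sketch of that derivation (an approximation of systemd's behavior, not its exact code):

    def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        # The DMI UUID (e.g. "A1B2C3D4-...") becomes the 32-hex-character
        # machine ID once dashes are dropped and the case is lowered.
        with open(path) as f:
            return f.read().strip().replace("-", "").lower()

    # print(machine_id_from_vm_uuid())  # requires the DMI sysfs file (root)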
Dec 13 05:54:29.150494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:54:29.150533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:54:29.150563 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 05:54:29.150584 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 05:54:29.150605 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 05:54:29.150634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:54:29.150670 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 05:54:29.150715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:54:29.150763 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 05:54:29.150787 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 05:54:29.150807 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 05:54:29.150827 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 05:54:29.150859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:54:29.150888 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:54:29.150909 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:54:29.150938 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:54:29.150959 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 05:54:29.150980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 05:54:29.150999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:54:29.151031 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:54:29.151058 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:54:29.151093 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 05:54:29.151126 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 05:54:29.151170 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 05:54:29.151191 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 05:54:29.151222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:29.151242 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 05:54:29.151874 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 05:54:29.151917 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 05:54:29.151942 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 05:54:29.151978 systemd[1]: Reached target machines.target - Containers. Dec 13 05:54:29.152006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 05:54:29.152028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 05:54:29.152071 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:54:29.152092 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 05:54:29.152123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:54:29.152154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:54:29.152187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:54:29.152207 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 05:54:29.152225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:54:29.152244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 05:54:29.152263 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 05:54:29.152282 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 05:54:29.152319 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 05:54:29.152351 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 05:54:29.152397 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:54:29.152419 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:54:29.152439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 05:54:29.152485 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 05:54:29.152510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:54:29.152540 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 05:54:29.152588 systemd[1]: Stopped verity-setup.service. Dec 13 05:54:29.152616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:29.152651 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 05:54:29.152682 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 05:54:29.152703 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 05:54:29.152724 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 05:54:29.152751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 05:54:29.152828 systemd-journald[1140]: Collecting audit messages is disabled. Dec 13 05:54:29.152897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 05:54:29.152921 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 05:54:29.152948 systemd-journald[1140]: Journal started Dec 13 05:54:29.152982 systemd-journald[1140]: Runtime Journal (/run/log/journal/2c9630c6cbbf443fb84b911a636b1705) is 4.7M, max 38.0M, 33.2M free. Dec 13 05:54:28.791639 systemd[1]: Queued start job for default target multi-user.target. Dec 13 05:54:28.815447 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 05:54:28.816282 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 05:54:29.164398 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 05:54:29.164453 kernel: loop: module loaded Dec 13 05:54:29.166309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 13 05:54:29.167661 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 05:54:29.167973 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 05:54:29.169192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:54:29.169618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:54:29.171080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:54:29.171665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:54:29.172984 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:54:29.173225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:54:29.175370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:54:29.176771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 05:54:29.178019 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 05:54:29.182489 kernel: fuse: init (API version 7.39) Dec 13 05:54:29.186194 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 05:54:29.186578 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 05:54:29.204855 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 05:54:29.216587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 05:54:29.227987 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 05:54:29.228915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 05:54:29.228968 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:54:29.231207 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 05:54:29.250682 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 05:54:29.273747 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 05:54:29.274682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:54:29.277602 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 05:54:29.283576 kernel: ACPI: bus type drm_connector registered Dec 13 05:54:29.283760 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 05:54:29.284566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:54:29.289735 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 05:54:29.291152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:54:29.301687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:54:29.307242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 05:54:29.317626 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 05:54:29.321793 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 05:54:29.325572 systemd-journald[1140]: Time spent on flushing to /var/log/journal/2c9630c6cbbf443fb84b911a636b1705 is 155.232ms for 1134 entries. Dec 13 05:54:29.325572 systemd-journald[1140]: System Journal (/var/log/journal/2c9630c6cbbf443fb84b911a636b1705) is 8.0M, max 584.8M, 576.8M free. Dec 13 05:54:29.500074 systemd-journald[1140]: Received client request to flush runtime journal. Dec 13 05:54:29.500147 kernel: loop0: detected capacity change from 0 to 8 Dec 13 05:54:29.500207 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 05:54:29.500231 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 05:54:29.323612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:54:29.324902 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 05:54:29.328807 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 05:54:29.334162 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 05:54:29.357719 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 05:54:29.372409 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 05:54:29.380716 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 05:54:29.406371 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:54:29.451360 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 05:54:29.453610 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 05:54:29.492615 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 05:54:29.502216 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 05:54:29.503866 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 05:54:29.524953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:54:29.535800 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 05:54:29.539487 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 05:54:29.604179 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 05:54:29.609118 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 05:54:29.609145 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 05:54:29.628600 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 05:54:29.625161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:54:29.694588 kernel: loop4: detected capacity change from 0 to 8 Dec 13 05:54:29.702777 kernel: loop5: detected capacity change from 0 to 205544 Dec 13 05:54:29.732536 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 05:54:29.765000 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 05:54:29.799884 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 05:54:29.801004 (sd-merge)[1209]: Merged extensions into '/usr'. Dec 13 05:54:29.808583 systemd[1]: Reloading requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 05:54:29.808769 systemd[1]: Reloading... 
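Note: the (sd-merge) lines above are systemd-sysext stacking the four extension images over /usr as read-only overlay layers. A sketch of the resulting mount, assuming an illustrative /run/extensions staging directory (the actual staging path is an implementation detail of systemd-sysext):

    def sysext_overlay_cmd(extensions: list[str], base: str = "/usr") -> str:
        # Topmost layer first in lowerdir; the base /usr is the bottom layer.
        lowers = [f"/run/extensions/{name}/usr" for name in extensions] + [base]
        return f"mount -t overlay overlay -o lowerdir={':'.join(lowers)},ro {base}"

    print(sysext_overlay_cmd(
        ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-openstack"]))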
Dec 13 05:54:29.913543 zram_generator::config[1235]: No configuration found. Dec 13 05:54:30.097714 ldconfig[1177]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 05:54:30.225556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:54:30.295328 systemd[1]: Reloading finished in 485 ms. Dec 13 05:54:30.326661 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 05:54:30.328422 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 05:54:30.341693 systemd[1]: Starting ensure-sysext.service... Dec 13 05:54:30.344851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:54:30.359635 systemd[1]: Reloading requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)... Dec 13 05:54:30.359656 systemd[1]: Reloading... Dec 13 05:54:30.430371 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 05:54:30.437274 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 05:54:30.442281 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 05:54:30.445032 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 05:54:30.450566 zram_generator::config[1315]: No configuration found. Dec 13 05:54:30.448685 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 05:54:30.467835 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 05:54:30.467978 systemd-tmpfiles[1292]: Skipping /boot Dec 13 05:54:30.494484 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 05:54:30.494729 systemd-tmpfiles[1292]: Skipping /boot Dec 13 05:54:30.645872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:54:30.715849 systemd[1]: Reloading finished in 355 ms. Dec 13 05:54:30.741020 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 05:54:30.754397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:54:30.773780 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:54:30.778782 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 05:54:30.786743 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 05:54:30.793005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 05:54:30.804732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:54:30.814413 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 05:54:30.822640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.822940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
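Note: the docker.socket warning above is systemd normalizing a legacy socket path, since /var/run is merely a symlink to /run. The rewrite amounts to:

    def normalize_listen_path(path: str) -> str:
        # "/var/run/docker.sock" and "/run/docker.sock" name the same object;
        # systemd updates the unit's path to the canonical /run spelling.
        legacy = "/var/run/"
        return "/run/" + path[len(legacy):] if path.startswith(legacy) else path

    print(normalize_listen_path("/var/run/docker.sock"))  # -> /run/docker.sock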
Dec 13 05:54:30.833807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:54:30.839601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:54:30.845589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:54:30.847738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:54:30.847936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.859761 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 05:54:30.861987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:54:30.863610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:54:30.870767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.871070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:54:30.878840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:54:30.881732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:54:30.881903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.887603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.887959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:54:30.897863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:54:30.899765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:54:30.899960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:54:30.902625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 05:54:30.908607 systemd[1]: Finished ensure-sysext.service. Dec 13 05:54:30.911219 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 05:54:30.935835 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 05:54:30.940351 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 05:54:30.941843 systemd-udevd[1381]: Using default interface naming scheme 'v255'. Dec 13 05:54:30.941932 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 05:54:30.947959 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 05:54:30.956383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:54:30.956637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 05:54:30.974019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:54:30.975197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:54:30.977151 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:54:30.978587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:54:30.979842 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 05:54:30.980112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:54:30.985551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:54:30.985666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:54:30.995738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:54:31.009765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:54:31.013954 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 05:54:31.037128 augenrules[1424]: No rules Dec 13 05:54:31.041981 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:54:31.058167 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 05:54:31.195872 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1421) Dec 13 05:54:31.232838 systemd-resolved[1380]: Positive Trust Anchors: Dec 13 05:54:31.235539 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:54:31.235592 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:54:31.240616 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 05:54:31.242484 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 05:54:31.246750 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 05:54:31.252662 systemd-networkd[1415]: lo: Link UP Dec 13 05:54:31.252677 systemd-networkd[1415]: lo: Gained carrier Dec 13 05:54:31.254237 systemd-resolved[1380]: Using system hostname 'srv-kh3sk.gb1.brightbox.com'. Dec 13 05:54:31.255272 systemd-networkd[1415]: Enumeration completed Dec 13 05:54:31.256126 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:54:31.257100 systemd-timesyncd[1399]: No network connectivity, watching for changes. Dec 13 05:54:31.268621 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1421) Dec 13 05:54:31.268718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 05:54:31.270448 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:54:31.273075 systemd[1]: Reached target network.target - Network. 
Dec 13 05:54:31.273786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:54:31.329583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1442) Dec 13 05:54:31.370655 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:54:31.370678 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 05:54:31.374541 systemd-networkd[1415]: eth0: Link UP Dec 13 05:54:31.374555 systemd-networkd[1415]: eth0: Gained carrier Dec 13 05:54:31.374574 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:54:31.398596 systemd-networkd[1415]: eth0: DHCPv4 address 10.230.15.170/30, gateway 10.230.15.169 acquired from 10.230.15.169 Dec 13 05:54:31.399980 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Dec 13 05:54:31.416519 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 05:54:31.422556 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 05:54:31.432031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 05:54:31.439563 kernel: ACPI: button: Power Button [PWRF] Dec 13 05:54:31.441866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 05:54:31.474263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 05:54:31.492496 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 05:54:31.506519 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 05:54:31.506583 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 05:54:31.507279 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 05:54:31.564874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:54:32.059087 systemd-resolved[1380]: Clock change detected. Flushing caches. Dec 13 05:54:32.060099 systemd-timesyncd[1399]: Contacted time server 212.82.85.226:123 (0.flatcar.pool.ntp.org). Dec 13 05:54:32.060504 systemd-timesyncd[1399]: Initial clock synchronization to Fri 2024-12-13 05:54:32.058919 UTC. Dec 13 05:54:32.192768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:54:32.196896 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 05:54:32.204423 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 05:54:32.231230 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:54:32.266307 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 05:54:32.267539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:54:32.268418 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:54:32.269379 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 05:54:32.270254 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 13 05:54:32.271503 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 05:54:32.272397 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 05:54:32.273226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 05:54:32.274008 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 05:54:32.274066 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:54:32.274726 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:54:32.277361 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 05:54:32.281075 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 05:54:32.286975 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 05:54:32.289706 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 05:54:32.291231 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 05:54:32.292149 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:54:32.292866 systemd[1]: Reached target basic.target - Basic System. Dec 13 05:54:32.293642 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:54:32.293703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:54:32.300276 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 05:54:32.308391 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 05:54:32.308908 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:54:32.310864 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 05:54:32.314266 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 05:54:32.322357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 05:54:32.323202 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 05:54:32.326295 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 05:54:32.335237 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 05:54:32.345750 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 05:54:32.369411 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 05:54:32.383395 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 05:54:32.388481 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 05:54:32.389309 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 05:54:32.397429 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 05:54:32.401692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 05:54:32.402924 dbus-daemon[1473]: [system] SELinux support is enabled Dec 13 05:54:32.406004 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 05:54:32.412264 dbus-daemon[1473]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 05:54:32.414737 jq[1474]: false Dec 13 05:54:32.420563 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 05:54:32.432774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 05:54:32.433162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 05:54:32.433746 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 05:54:32.435765 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 05:54:32.440889 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 05:54:32.441240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 05:54:32.469938 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 05:54:32.471071 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 05:54:32.470010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 05:54:32.471077 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 05:54:32.471110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 05:54:32.486420 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 05:54:32.486997 jq[1487]: true Dec 13 05:54:32.496601 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 05:54:32.498851 extend-filesystems[1475]: Found loop4 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found loop5 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found loop6 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found loop7 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda1 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda2 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda3 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found usr Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda4 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda6 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda7 Dec 13 05:54:32.498851 extend-filesystems[1475]: Found vda9 Dec 13 05:54:32.498851 extend-filesystems[1475]: Checking size of /dev/vda9 Dec 13 05:54:32.604524 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 05:54:32.604800 update_engine[1483]: I20241213 05:54:32.500212 1483 main.cc:92] Flatcar Update Engine starting Dec 13 05:54:32.604800 update_engine[1483]: I20241213 05:54:32.512203 1483 update_check_scheduler.cc:74] Next update check in 9m11s Dec 13 05:54:32.512147 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 05:54:32.610612 extend-filesystems[1475]: Resized partition /dev/vda9 Dec 13 05:54:32.616561 tar[1495]: linux-amd64/helm Dec 13 05:54:32.523777 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 05:54:32.618304 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Dec 13 05:54:32.628061 jq[1506]: true Dec 13 05:54:32.551839 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 05:54:32.634157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1420) Dec 13 05:54:32.711965 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 05:54:32.712617 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 05:54:32.716982 systemd-logind[1482]: New seat seat0. Dec 13 05:54:32.720529 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 05:54:32.839116 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 05:54:32.839355 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 05:54:32.840933 dbus-daemon[1473]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 05:54:32.850470 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 05:54:32.862098 bash[1535]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:54:32.865761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 05:54:32.867530 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 05:54:32.872827 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 05:54:32.879230 systemd[1]: Starting sshkeys.service... Dec 13 05:54:32.891775 polkitd[1541]: Started polkitd version 121 Dec 13 05:54:32.911036 polkitd[1541]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 05:54:32.911978 polkitd[1541]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 05:54:32.920261 polkitd[1541]: Finished loading, compiling and executing 2 rules Dec 13 05:54:32.926869 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 05:54:32.949014 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 05:54:32.930641 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 05:54:32.937230 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 05:54:32.948576 polkitd[1541]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 05:54:32.939446 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 05:54:32.958236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 05:54:32.970562 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 05:54:32.976007 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 05:54:32.976007 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 05:54:32.976007 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Dec 13 05:54:32.985395 extend-filesystems[1475]: Resized filesystem in /dev/vda9 Dec 13 05:54:32.983896 systemd[1]: Started sshd@0-10.230.15.170:22-147.75.109.163:39818.service - OpenSSH per-connection server daemon (147.75.109.163:39818). Dec 13 05:54:32.988399 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 05:54:32.988692 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 05:54:33.003890 systemd-hostnamed[1505]: Hostname set to (static) Dec 13 05:54:33.015590 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 05:54:33.024762 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 05:54:33.035517 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 05:54:33.090793 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 05:54:33.098444 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 05:54:33.109704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 05:54:33.113090 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 05:54:33.132991 containerd[1496]: time="2024-12-13T05:54:33.132871448Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 05:54:33.172752 containerd[1496]: time="2024-12-13T05:54:33.172696283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.175446 containerd[1496]: time="2024-12-13T05:54:33.175403094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:54:33.175509 containerd[1496]: time="2024-12-13T05:54:33.175459880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 05:54:33.175509 containerd[1496]: time="2024-12-13T05:54:33.175492671Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 05:54:33.175779 containerd[1496]: time="2024-12-13T05:54:33.175749325Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 05:54:33.175865 containerd[1496]: time="2024-12-13T05:54:33.175789642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.175932 containerd[1496]: time="2024-12-13T05:54:33.175903402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:54:33.175975 containerd[1496]: time="2024-12-13T05:54:33.175934369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.176216 containerd[1496]: time="2024-12-13T05:54:33.176185370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:54:33.176269 containerd[1496]: time="2024-12-13T05:54:33.176216711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 05:54:33.176269 containerd[1496]: time="2024-12-13T05:54:33.176238627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:54:33.176269 containerd[1496]: time="2024-12-13T05:54:33.176256269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.176450 containerd[1496]: time="2024-12-13T05:54:33.176399678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.177019 containerd[1496]: time="2024-12-13T05:54:33.176786087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:54:33.177019 containerd[1496]: time="2024-12-13T05:54:33.176929850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:54:33.177019 containerd[1496]: time="2024-12-13T05:54:33.176952897Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 05:54:33.177147 containerd[1496]: time="2024-12-13T05:54:33.177070123Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 05:54:33.177212 containerd[1496]: time="2024-12-13T05:54:33.177190106Z" level=info msg="metadata content store policy set" policy=shared Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.180920343Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.180999885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181028445Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181065729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181109410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181329551Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181621418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181808397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 05:54:33.181866 containerd[1496]: time="2024-12-13T05:54:33.181832923Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.181880142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.181928231Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.181953817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.181989694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182011133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182033029Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182052914Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182071208Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182094691Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182152392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182190322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182223 containerd[1496]: time="2024-12-13T05:54:33.182210462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182241129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182264073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182283812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182303672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182361489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182385409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182407027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182425724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182446230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182466567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182489514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182529325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182552362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.182622 containerd[1496]: time="2024-12-13T05:54:33.182569559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182645769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182680203Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182711402Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182729022Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182743572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182760283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182781669Z" level=info msg="NRI interface is disabled by configuration." Dec 13 05:54:33.183105 containerd[1496]: time="2024-12-13T05:54:33.182797926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 05:54:33.183403 containerd[1496]: time="2024-12-13T05:54:33.183259146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 05:54:33.183403 containerd[1496]: time="2024-12-13T05:54:33.183380117Z" level=info msg="Connect containerd service" Dec 13 05:54:33.183695 containerd[1496]: time="2024-12-13T05:54:33.183443227Z" level=info msg="using legacy CRI server" Dec 13 05:54:33.183695 containerd[1496]: time="2024-12-13T05:54:33.183460924Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 05:54:33.183695 containerd[1496]: time="2024-12-13T05:54:33.183638130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 05:54:33.185375 containerd[1496]: time="2024-12-13T05:54:33.184640392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 05:54:33.185375 
containerd[1496]: time="2024-12-13T05:54:33.184994542Z" level=info msg="Start subscribing containerd event" Dec 13 05:54:33.185375 containerd[1496]: time="2024-12-13T05:54:33.185093030Z" level=info msg="Start recovering state" Dec 13 05:54:33.185375 containerd[1496]: time="2024-12-13T05:54:33.185299687Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 05:54:33.185554 containerd[1496]: time="2024-12-13T05:54:33.185448214Z" level=info msg="Start event monitor" Dec 13 05:54:33.187584 containerd[1496]: time="2024-12-13T05:54:33.185581995Z" level=info msg="Start snapshots syncer" Dec 13 05:54:33.187584 containerd[1496]: time="2024-12-13T05:54:33.185614952Z" level=info msg="Start cni network conf syncer for default" Dec 13 05:54:33.187584 containerd[1496]: time="2024-12-13T05:54:33.185630618Z" level=info msg="Start streaming server" Dec 13 05:54:33.187584 containerd[1496]: time="2024-12-13T05:54:33.185791401Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 05:54:33.186047 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 05:54:33.188407 containerd[1496]: time="2024-12-13T05:54:33.188369587Z" level=info msg="containerd successfully booted in 0.057536s" Dec 13 05:54:33.389386 systemd-networkd[1415]: eth0: Gained IPv6LL Dec 13 05:54:33.401740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 05:54:33.406151 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 05:54:33.419344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:54:33.423262 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 05:54:33.477970 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 05:54:33.528381 tar[1495]: linux-amd64/LICENSE Dec 13 05:54:33.528650 tar[1495]: linux-amd64/README.md Dec 13 05:54:33.542721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 05:54:33.932656 sshd[1563]: Accepted publickey for core from 147.75.109.163 port 39818 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:33.937651 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:33.956481 systemd-logind[1482]: New session 1 of user core. Dec 13 05:54:33.959824 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 05:54:33.971918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 05:54:33.995653 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 05:54:34.005719 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 05:54:34.020070 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 05:54:34.164361 systemd[1597]: Queued start job for default target default.target. Dec 13 05:54:34.181727 systemd[1597]: Created slice app.slice - User Application Slice. Dec 13 05:54:34.181781 systemd[1597]: Reached target paths.target - Paths. Dec 13 05:54:34.181803 systemd[1597]: Reached target timers.target - Timers. Dec 13 05:54:34.184083 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 05:54:34.216401 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 05:54:34.216798 systemd[1597]: Reached target sockets.target - Sockets. Dec 13 05:54:34.216945 systemd[1597]: Reached target basic.target - Basic System. 
Dec 13 05:54:34.217026 systemd[1597]: Reached target default.target - Main User Target. Dec 13 05:54:34.217090 systemd[1597]: Startup finished in 182ms. Dec 13 05:54:34.217537 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 05:54:34.228463 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 05:54:34.359631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:54:34.376687 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:54:34.793985 systemd-networkd[1415]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83ea:24:19ff:fee6:faa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83ea:24:19ff:fee6:faa/64 assigned by NDisc. Dec 13 05:54:34.793998 systemd-networkd[1415]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:54:34.868623 systemd[1]: Started sshd@1-10.230.15.170:22-147.75.109.163:39832.service - OpenSSH per-connection server daemon (147.75.109.163:39832). Dec 13 05:54:35.016694 kubelet[1612]: E1213 05:54:35.016576 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:54:35.019552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:54:35.019811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:54:35.758295 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 39832 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:35.760690 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:35.769481 systemd-logind[1482]: New session 2 of user core. Dec 13 05:54:35.780450 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 05:54:36.380639 sshd[1620]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:36.386159 systemd[1]: sshd@1-10.230.15.170:22-147.75.109.163:39832.service: Deactivated successfully. Dec 13 05:54:36.389417 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 05:54:36.390614 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Dec 13 05:54:36.391995 systemd-logind[1482]: Removed session 2. Dec 13 05:54:36.543683 systemd[1]: Started sshd@2-10.230.15.170:22-147.75.109.163:38522.service - OpenSSH per-connection server daemon (147.75.109.163:38522). Dec 13 05:54:37.429155 sshd[1632]: Accepted publickey for core from 147.75.109.163 port 38522 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:37.431398 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:37.439438 systemd-logind[1482]: New session 3 of user core. Dec 13 05:54:37.445465 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 05:54:38.050965 sshd[1632]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:38.054999 systemd[1]: sshd@2-10.230.15.170:22-147.75.109.163:38522.service: Deactivated successfully. Dec 13 05:54:38.057776 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 05:54:38.059839 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. 
Dec 13 05:54:38.061536 systemd-logind[1482]: Removed session 3. Dec 13 05:54:38.189756 login[1575]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 13 05:54:38.197493 login[1574]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:54:38.205513 systemd-logind[1482]: New session 5 of user core. Dec 13 05:54:38.215476 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 05:54:39.193078 login[1575]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:54:39.200608 systemd-logind[1482]: New session 4 of user core. Dec 13 05:54:39.218521 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 05:54:39.494950 coreos-metadata[1472]: Dec 13 05:54:39.494 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:54:39.523099 coreos-metadata[1472]: Dec 13 05:54:39.522 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 05:54:39.529375 coreos-metadata[1472]: Dec 13 05:54:39.529 INFO Fetch failed with 404: resource not found Dec 13 05:54:39.529375 coreos-metadata[1472]: Dec 13 05:54:39.529 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:54:39.530099 coreos-metadata[1472]: Dec 13 05:54:39.530 INFO Fetch successful Dec 13 05:54:39.530269 coreos-metadata[1472]: Dec 13 05:54:39.530 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 05:54:39.542693 coreos-metadata[1472]: Dec 13 05:54:39.542 INFO Fetch successful Dec 13 05:54:39.542958 coreos-metadata[1472]: Dec 13 05:54:39.542 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 05:54:39.558367 coreos-metadata[1472]: Dec 13 05:54:39.558 INFO Fetch successful Dec 13 05:54:39.558367 coreos-metadata[1472]: Dec 13 05:54:39.558 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 05:54:39.575486 coreos-metadata[1472]: Dec 13 05:54:39.575 INFO Fetch successful Dec 13 05:54:39.575486 coreos-metadata[1472]: Dec 13 05:54:39.575 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 05:54:39.592220 coreos-metadata[1472]: Dec 13 05:54:39.592 INFO Fetch successful Dec 13 05:54:39.642352 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 05:54:39.643643 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 05:54:40.069698 coreos-metadata[1558]: Dec 13 05:54:40.069 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:54:40.092387 coreos-metadata[1558]: Dec 13 05:54:40.092 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 05:54:40.118521 coreos-metadata[1558]: Dec 13 05:54:40.118 INFO Fetch successful Dec 13 05:54:40.118604 coreos-metadata[1558]: Dec 13 05:54:40.118 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 05:54:40.153790 coreos-metadata[1558]: Dec 13 05:54:40.153 INFO Fetch successful Dec 13 05:54:40.155896 unknown[1558]: wrote ssh authorized keys file for user: core Dec 13 05:54:40.182425 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:54:40.183616 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 05:54:40.186436 systemd[1]: Finished sshkeys.service. 
Dec 13 05:54:40.188019 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 05:54:40.190291 systemd[1]: Startup finished in 1.353s (kernel) + 14.246s (initrd) + 11.840s (userspace) = 27.440s. Dec 13 05:54:45.035357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 05:54:45.049515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:54:45.266717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:54:45.279633 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:54:45.349928 kubelet[1682]: E1213 05:54:45.349627 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:54:45.355324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:54:45.355791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:54:48.208503 systemd[1]: Started sshd@3-10.230.15.170:22-147.75.109.163:42096.service - OpenSSH per-connection server daemon (147.75.109.163:42096). Dec 13 05:54:49.152181 sshd[1691]: Accepted publickey for core from 147.75.109.163 port 42096 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:49.154644 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:49.161644 systemd-logind[1482]: New session 6 of user core. Dec 13 05:54:49.173332 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 05:54:49.773793 sshd[1691]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:49.778385 systemd[1]: sshd@3-10.230.15.170:22-147.75.109.163:42096.service: Deactivated successfully. Dec 13 05:54:49.781174 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 05:54:49.782977 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Dec 13 05:54:49.784502 systemd-logind[1482]: Removed session 6. Dec 13 05:54:49.937590 systemd[1]: Started sshd@4-10.230.15.170:22-147.75.109.163:42110.service - OpenSSH per-connection server daemon (147.75.109.163:42110). Dec 13 05:54:50.820616 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 42110 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:50.822854 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:50.829060 systemd-logind[1482]: New session 7 of user core. Dec 13 05:54:50.840323 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 05:54:51.433610 sshd[1698]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:51.438467 systemd[1]: sshd@4-10.230.15.170:22-147.75.109.163:42110.service: Deactivated successfully. Dec 13 05:54:51.440466 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 05:54:51.441281 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Dec 13 05:54:51.442685 systemd-logind[1482]: Removed session 7. Dec 13 05:54:51.597523 systemd[1]: Started sshd@5-10.230.15.170:22-147.75.109.163:42120.service - OpenSSH per-connection server daemon (147.75.109.163:42120). 
Dec 13 05:54:52.485769 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 42120 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:52.487833 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:52.494931 systemd-logind[1482]: New session 8 of user core. Dec 13 05:54:52.501359 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 05:54:53.109602 sshd[1705]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:53.113861 systemd[1]: sshd@5-10.230.15.170:22-147.75.109.163:42120.service: Deactivated successfully. Dec 13 05:54:53.116249 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 05:54:53.118424 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Dec 13 05:54:53.119825 systemd-logind[1482]: Removed session 8. Dec 13 05:54:53.263279 systemd[1]: Started sshd@6-10.230.15.170:22-147.75.109.163:42134.service - OpenSSH per-connection server daemon (147.75.109.163:42134). Dec 13 05:54:54.162551 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 42134 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:54.165765 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:54.173546 systemd-logind[1482]: New session 9 of user core. Dec 13 05:54:54.183362 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 05:54:54.653923 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 05:54:54.654436 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:54:54.671403 sudo[1715]: pam_unix(sudo:session): session closed for user root Dec 13 05:54:54.815830 sshd[1712]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:54.822640 systemd[1]: sshd@6-10.230.15.170:22-147.75.109.163:42134.service: Deactivated successfully. Dec 13 05:54:54.825154 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 05:54:54.826050 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Dec 13 05:54:54.827983 systemd-logind[1482]: Removed session 9. Dec 13 05:54:54.982592 systemd[1]: Started sshd@7-10.230.15.170:22-147.75.109.163:42144.service - OpenSSH per-connection server daemon (147.75.109.163:42144). Dec 13 05:54:55.535180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 05:54:55.546416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:54:55.686469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:54:55.709087 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:54:55.802232 kubelet[1730]: E1213 05:54:55.801827 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:54:55.807069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:54:55.807483 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 05:54:55.862827 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 42144 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:55.865489 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:55.874139 systemd-logind[1482]: New session 10 of user core. Dec 13 05:54:55.884445 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 05:54:56.340622 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 05:54:56.341194 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:54:56.347591 sudo[1738]: pam_unix(sudo:session): session closed for user root Dec 13 05:54:56.356645 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 05:54:56.357499 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:54:56.375917 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 05:54:56.390291 auditctl[1741]: No rules Dec 13 05:54:56.392440 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 05:54:56.392841 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 05:54:56.399610 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:54:56.448841 augenrules[1759]: No rules Dec 13 05:54:56.449795 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:54:56.451883 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 13 05:54:56.595655 sshd[1720]: pam_unix(sshd:session): session closed for user core Dec 13 05:54:56.600763 systemd[1]: sshd@7-10.230.15.170:22-147.75.109.163:42144.service: Deactivated successfully. Dec 13 05:54:56.603574 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 05:54:56.605848 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Dec 13 05:54:56.607646 systemd-logind[1482]: Removed session 10. Dec 13 05:54:56.750394 systemd[1]: Started sshd@8-10.230.15.170:22-147.75.109.163:47718.service - OpenSSH per-connection server daemon (147.75.109.163:47718). Dec 13 05:54:57.650968 sshd[1767]: Accepted publickey for core from 147.75.109.163 port 47718 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:54:57.653265 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:54:57.661980 systemd-logind[1482]: New session 11 of user core. Dec 13 05:54:57.665343 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 05:54:58.129923 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 05:54:58.131160 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:54:58.617973 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 05:54:58.630721 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 05:54:59.103692 dockerd[1786]: time="2024-12-13T05:54:59.103551861Z" level=info msg="Starting up" Dec 13 05:54:59.248072 systemd[1]: var-lib-docker-metacopy\x2dcheck1392278723-merged.mount: Deactivated successfully. 
Dec 13 05:54:59.273524 dockerd[1786]: time="2024-12-13T05:54:59.273450260Z" level=info msg="Loading containers: start." Dec 13 05:54:59.433285 kernel: Initializing XFRM netlink socket Dec 13 05:54:59.542408 systemd-networkd[1415]: docker0: Link UP Dec 13 05:54:59.581469 dockerd[1786]: time="2024-12-13T05:54:59.581390438Z" level=info msg="Loading containers: done." Dec 13 05:54:59.604384 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1488091925-merged.mount: Deactivated successfully. Dec 13 05:54:59.606626 dockerd[1786]: time="2024-12-13T05:54:59.605735220Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 05:54:59.606626 dockerd[1786]: time="2024-12-13T05:54:59.605999147Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 05:54:59.606626 dockerd[1786]: time="2024-12-13T05:54:59.606214700Z" level=info msg="Daemon has completed initialization" Dec 13 05:54:59.653092 dockerd[1786]: time="2024-12-13T05:54:59.652968735Z" level=info msg="API listen on /run/docker.sock" Dec 13 05:54:59.653665 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 05:55:00.875612 containerd[1496]: time="2024-12-13T05:55:00.875460754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 05:55:01.691473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737540821.mount: Deactivated successfully. Dec 13 05:55:03.433156 containerd[1496]: time="2024-12-13T05:55:03.433049688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:03.434552 containerd[1496]: time="2024-12-13T05:55:03.434459301Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975491" Dec 13 05:55:03.435455 containerd[1496]: time="2024-12-13T05:55:03.435380631Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:03.439473 containerd[1496]: time="2024-12-13T05:55:03.439398494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:03.441412 containerd[1496]: time="2024-12-13T05:55:03.441047262Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.565452872s" Dec 13 05:55:03.441412 containerd[1496]: time="2024-12-13T05:55:03.441133865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 05:55:03.444787 containerd[1496]: time="2024-12-13T05:55:03.444744598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 05:55:04.842397 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 05:55:05.279875 containerd[1496]: time="2024-12-13T05:55:05.279733369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:05.281417 containerd[1496]: time="2024-12-13T05:55:05.281334905Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702165" Dec 13 05:55:05.283582 containerd[1496]: time="2024-12-13T05:55:05.282385133Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:05.286716 containerd[1496]: time="2024-12-13T05:55:05.286654253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:05.288296 containerd[1496]: time="2024-12-13T05:55:05.288103031Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.843305098s" Dec 13 05:55:05.288296 containerd[1496]: time="2024-12-13T05:55:05.288167801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 05:55:05.288922 containerd[1496]: time="2024-12-13T05:55:05.288730545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 05:55:06.035008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 05:55:06.045501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:06.222850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:06.235629 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:55:06.284854 kubelet[1998]: E1213 05:55:06.284771 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:55:06.287723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:55:06.288186 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 05:55:07.106802 containerd[1496]: time="2024-12-13T05:55:07.106734112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:07.108264 containerd[1496]: time="2024-12-13T05:55:07.108154398Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652075" Dec 13 05:55:07.108981 containerd[1496]: time="2024-12-13T05:55:07.108901134Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:07.114853 containerd[1496]: time="2024-12-13T05:55:07.113748578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:07.119818 containerd[1496]: time="2024-12-13T05:55:07.119755166Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.830983119s" Dec 13 05:55:07.120837 containerd[1496]: time="2024-12-13T05:55:07.120583719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 05:55:07.121444 containerd[1496]: time="2024-12-13T05:55:07.121404333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 05:55:08.648147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694854658.mount: Deactivated successfully. 
Dec 13 05:55:09.517437 containerd[1496]: time="2024-12-13T05:55:09.517141168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:09.518423 containerd[1496]: time="2024-12-13T05:55:09.518294456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Dec 13 05:55:09.519267 containerd[1496]: time="2024-12-13T05:55:09.519193650Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:09.521979 containerd[1496]: time="2024-12-13T05:55:09.521917905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:09.523219 containerd[1496]: time="2024-12-13T05:55:09.523013949Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.401564605s" Dec 13 05:55:09.523219 containerd[1496]: time="2024-12-13T05:55:09.523062556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 05:55:09.524006 containerd[1496]: time="2024-12-13T05:55:09.523812645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 05:55:10.143672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243181668.mount: Deactivated successfully. 
Dec 13 05:55:11.263881 containerd[1496]: time="2024-12-13T05:55:11.263689511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.265243 containerd[1496]: time="2024-12-13T05:55:11.265187895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 05:55:11.266263 containerd[1496]: time="2024-12-13T05:55:11.266191589Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.270296 containerd[1496]: time="2024-12-13T05:55:11.270216727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.272038 containerd[1496]: time="2024-12-13T05:55:11.271866669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.748013419s" Dec 13 05:55:11.272038 containerd[1496]: time="2024-12-13T05:55:11.271914640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 05:55:11.273594 containerd[1496]: time="2024-12-13T05:55:11.273550964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 05:55:11.792453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087331838.mount: Deactivated successfully. 
Dec 13 05:55:11.801261 containerd[1496]: time="2024-12-13T05:55:11.801204422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.802035 containerd[1496]: time="2024-12-13T05:55:11.801967773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Dec 13 05:55:11.802948 containerd[1496]: time="2024-12-13T05:55:11.802902816Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.806086 containerd[1496]: time="2024-12-13T05:55:11.806025280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:11.807328 containerd[1496]: time="2024-12-13T05:55:11.807285736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.688518ms" Dec 13 05:55:11.807464 containerd[1496]: time="2024-12-13T05:55:11.807340003Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 05:55:11.808152 containerd[1496]: time="2024-12-13T05:55:11.808000422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 05:55:12.422624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731709321.mount: Deactivated successfully. Dec 13 05:55:15.007879 containerd[1496]: time="2024-12-13T05:55:15.007816834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:15.010077 containerd[1496]: time="2024-12-13T05:55:15.009533515Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Dec 13 05:55:15.010077 containerd[1496]: time="2024-12-13T05:55:15.010022654Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:15.014719 containerd[1496]: time="2024-12-13T05:55:15.014618314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:15.017950 containerd[1496]: time="2024-12-13T05:55:15.016535283Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.208487118s" Dec 13 05:55:15.017950 containerd[1496]: time="2024-12-13T05:55:15.016581948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 05:55:16.534961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Dec 13 05:55:16.543512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:16.724334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:16.726956 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:55:16.810814 kubelet[2146]: E1213 05:55:16.810341 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:55:16.813546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:55:16.813792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:55:17.765152 update_engine[1483]: I20241213 05:55:17.763648 1483 update_attempter.cc:509] Updating boot flags... Dec 13 05:55:17.881185 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2160) Dec 13 05:55:17.943149 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2161) Dec 13 05:55:19.372446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:19.387557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:19.421511 systemd[1]: Reloading requested from client PID 2174 ('systemctl') (unit session-11.scope)... Dec 13 05:55:19.421771 systemd[1]: Reloading... Dec 13 05:55:19.598310 zram_generator::config[2209]: No configuration found. Dec 13 05:55:19.785606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:55:19.891339 systemd[1]: Reloading finished in 468 ms. Dec 13 05:55:19.972014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:19.978361 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:55:19.978706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:19.985568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:20.131748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:20.151997 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:55:20.224822 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:55:20.224822 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:55:20.224822 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 05:55:20.225428 kubelet[2282]: I1213 05:55:20.224857 2282 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:55:20.820150 kubelet[2282]: I1213 05:55:20.818899 2282 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 05:55:20.820150 kubelet[2282]: I1213 05:55:20.818942 2282 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:55:20.820150 kubelet[2282]: I1213 05:55:20.819322 2282 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 05:55:20.853917 kubelet[2282]: I1213 05:55:20.853861 2282 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:55:20.855673 kubelet[2282]: E1213 05:55:20.855556 2282 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.15.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:20.865034 kubelet[2282]: E1213 05:55:20.864987 2282 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 05:55:20.865145 kubelet[2282]: I1213 05:55:20.865036 2282 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 05:55:20.873841 kubelet[2282]: I1213 05:55:20.873810 2282 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 05:55:20.875384 kubelet[2282]: I1213 05:55:20.875334 2282 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 05:55:20.875702 kubelet[2282]: I1213 05:55:20.875642 2282 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:55:20.875978 kubelet[2282]: I1213 05:55:20.875700 2282 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-kh3sk.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 05:55:20.876257 kubelet[2282]: I1213 05:55:20.875990 2282 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:55:20.876257 kubelet[2282]: I1213 05:55:20.876006 2282 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 05:55:20.876257 kubelet[2282]: I1213 05:55:20.876219 2282 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:55:20.878843 kubelet[2282]: I1213 05:55:20.878427 2282 kubelet.go:408] "Attempting to sync node with API server" Dec 13 05:55:20.878843 kubelet[2282]: I1213 05:55:20.878488 2282 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:55:20.878843 kubelet[2282]: I1213 05:55:20.878567 2282 kubelet.go:314] "Adding apiserver pod source" Dec 13 05:55:20.878843 kubelet[2282]: I1213 05:55:20.878607 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:55:20.885294 kubelet[2282]: W1213 05:55:20.884135 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.15.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-kh3sk.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:20.885294 kubelet[2282]: E1213 05:55:20.884214 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.15.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-kh3sk.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:20.885294 kubelet[2282]: W1213 05:55:20.884731 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.15.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:20.885294 kubelet[2282]: E1213 05:55:20.884781 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.15.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:20.885294 kubelet[2282]: I1213 05:55:20.885039 2282 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:55:20.887454 kubelet[2282]: I1213 05:55:20.887265 2282 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:55:20.888960 kubelet[2282]: W1213 05:55:20.888207 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 05:55:20.890545 kubelet[2282]: I1213 05:55:20.890329 2282 server.go:1269] "Started kubelet" Dec 13 05:55:20.891727 kubelet[2282]: I1213 05:55:20.891686 2282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:55:20.894035 kubelet[2282]: I1213 05:55:20.893999 2282 server.go:460] "Adding debug handlers to kubelet server" Dec 13 05:55:20.897365 kubelet[2282]: I1213 05:55:20.895610 2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:55:20.897365 kubelet[2282]: I1213 05:55:20.897000 2282 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:55:20.907672 kubelet[2282]: I1213 05:55:20.907650 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:55:20.908553 kubelet[2282]: E1213 05:55:20.902501 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.170:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.170:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-kh3sk.gb1.brightbox.com.1810a6dc1f91bac6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-kh3sk.gb1.brightbox.com,UID:srv-kh3sk.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-kh3sk.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:55:20.890301126 +0000 UTC m=+0.733207900,LastTimestamp:2024-12-13 05:55:20.890301126 +0000 UTC m=+0.733207900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-kh3sk.gb1.brightbox.com,}" Dec 13 05:55:20.911657 kubelet[2282]: I1213 05:55:20.911349 2282 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 05:55:20.914287 kubelet[2282]: I1213 
05:55:20.914264 2282 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 05:55:20.914677 kubelet[2282]: E1213 05:55:20.914649 2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-kh3sk.gb1.brightbox.com\" not found" Dec 13 05:55:20.920677 kubelet[2282]: I1213 05:55:20.920652 2282 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 05:55:20.920990 kubelet[2282]: I1213 05:55:20.920956 2282 reconciler.go:26] "Reconciler: start to sync state" Dec 13 05:55:20.921601 kubelet[2282]: W1213 05:55:20.921552 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.15.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:20.921930 kubelet[2282]: E1213 05:55:20.921739 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.15.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:20.921930 kubelet[2282]: E1213 05:55:20.921865 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-kh3sk.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.170:6443: connect: connection refused" interval="200ms" Dec 13 05:55:20.923001 kubelet[2282]: I1213 05:55:20.922978 2282 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:55:20.923234 kubelet[2282]: I1213 05:55:20.923210 2282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:55:20.926166 kubelet[2282]: E1213 05:55:20.925664 2282 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:55:20.926166 kubelet[2282]: I1213 05:55:20.925902 2282 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:55:20.946029 kubelet[2282]: I1213 05:55:20.945951 2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:55:20.947721 kubelet[2282]: I1213 05:55:20.947694 2282 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 05:55:20.947813 kubelet[2282]: I1213 05:55:20.947755 2282 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:55:20.947813 kubelet[2282]: I1213 05:55:20.947809 2282 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 05:55:20.947947 kubelet[2282]: E1213 05:55:20.947891 2282 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:55:20.953220 kubelet[2282]: W1213 05:55:20.952726 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.15.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:20.953862 kubelet[2282]: E1213 05:55:20.953835 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.15.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:20.960826 kubelet[2282]: I1213 05:55:20.960485 2282 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:55:20.960826 kubelet[2282]: I1213 05:55:20.960508 2282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:55:20.960826 kubelet[2282]: I1213 05:55:20.960538 2282 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:55:20.962638 kubelet[2282]: I1213 05:55:20.962611 2282 policy_none.go:49] "None policy: Start" Dec 13 05:55:20.963512 kubelet[2282]: I1213 05:55:20.963484 2282 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:55:20.963646 kubelet[2282]: I1213 05:55:20.963627 2282 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:55:20.973257 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 05:55:20.986879 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 05:55:21.001337 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 05:55:21.012744 kubelet[2282]: I1213 05:55:21.012698 2282 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:55:21.013008 kubelet[2282]: I1213 05:55:21.012976 2282 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 05:55:21.013088 kubelet[2282]: I1213 05:55:21.013011 2282 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 05:55:21.014664 kubelet[2282]: I1213 05:55:21.014105 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:55:21.017094 kubelet[2282]: E1213 05:55:21.017054 2282 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-kh3sk.gb1.brightbox.com\" not found" Dec 13 05:55:21.066138 systemd[1]: Created slice kubepods-burstable-pode0e1d01dc1fe04bc69bee025ab5859e5.slice - libcontainer container kubepods-burstable-pode0e1d01dc1fe04bc69bee025ab5859e5.slice. Dec 13 05:55:21.091488 systemd[1]: Created slice kubepods-burstable-pod089f75e9e942cf54548bd80d554b2c93.slice - libcontainer container kubepods-burstable-pod089f75e9e942cf54548bd80d554b2c93.slice. 
Dec 13 05:55:21.105703 systemd[1]: Created slice kubepods-burstable-podc112ee210bf86c0589cd943a2e5d8132.slice - libcontainer container kubepods-burstable-podc112ee210bf86c0589cd943a2e5d8132.slice. Dec 13 05:55:21.116167 kubelet[2282]: I1213 05:55:21.116069 2282 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.116595 kubelet[2282]: E1213 05:55:21.116564 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.15.170:6443/api/v1/nodes\": dial tcp 10.230.15.170:6443: connect: connection refused" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.122333 kubelet[2282]: I1213 05:55:21.122297 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.122867 kubelet[2282]: I1213 05:55:21.122505 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c112ee210bf86c0589cd943a2e5d8132-kubeconfig\") pod \"kube-scheduler-srv-kh3sk.gb1.brightbox.com\" (UID: \"c112ee210bf86c0589cd943a2e5d8132\") " pod="kube-system/kube-scheduler-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.122867 kubelet[2282]: E1213 05:55:21.122562 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-kh3sk.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.170:6443: connect: connection refused" interval="400ms" Dec 13 05:55:21.122867 kubelet[2282]: I1213 05:55:21.122594 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-k8s-certs\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.122867 kubelet[2282]: I1213 05:55:21.122633 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-ca-certs\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.122867 kubelet[2282]: I1213 05:55:21.122661 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-k8s-certs\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.123161 kubelet[2282]: I1213 05:55:21.122696 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-kubeconfig\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " 
pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.123161 kubelet[2282]: I1213 05:55:21.122721 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-ca-certs\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.123161 kubelet[2282]: I1213 05:55:21.122754 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-usr-share-ca-certificates\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.123161 kubelet[2282]: I1213 05:55:21.122818 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-flexvolume-dir\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.321252 kubelet[2282]: I1213 05:55:21.320768 2282 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.321801 kubelet[2282]: E1213 05:55:21.321276 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.15.170:6443/api/v1/nodes\": dial tcp 10.230.15.170:6443: connect: connection refused" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.392776 containerd[1496]: time="2024-12-13T05:55:21.392350394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-kh3sk.gb1.brightbox.com,Uid:e0e1d01dc1fe04bc69bee025ab5859e5,Namespace:kube-system,Attempt:0,}" Dec 13 05:55:21.409478 containerd[1496]: time="2024-12-13T05:55:21.409046994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-kh3sk.gb1.brightbox.com,Uid:c112ee210bf86c0589cd943a2e5d8132,Namespace:kube-system,Attempt:0,}" Dec 13 05:55:21.409478 containerd[1496]: time="2024-12-13T05:55:21.409049151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-kh3sk.gb1.brightbox.com,Uid:089f75e9e942cf54548bd80d554b2c93,Namespace:kube-system,Attempt:0,}" Dec 13 05:55:21.523647 kubelet[2282]: E1213 05:55:21.523586 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-kh3sk.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.170:6443: connect: connection refused" interval="800ms" Dec 13 05:55:21.725369 kubelet[2282]: I1213 05:55:21.724685 2282 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.726306 kubelet[2282]: E1213 05:55:21.726260 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.15.170:6443/api/v1/nodes\": dial tcp 10.230.15.170:6443: connect: connection refused" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:21.858085 kubelet[2282]: W1213 05:55:21.857982 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.230.15.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-kh3sk.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:21.858280 kubelet[2282]: E1213 05:55:21.858113 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.15.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-kh3sk.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:22.022768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306382431.mount: Deactivated successfully. Dec 13 05:55:22.069375 kubelet[2282]: W1213 05:55:22.069271 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.15.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:22.069375 kubelet[2282]: E1213 05:55:22.069337 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.15.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:22.077652 containerd[1496]: time="2024-12-13T05:55:22.076315426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:55:22.079480 containerd[1496]: time="2024-12-13T05:55:22.079425796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:55:22.081728 containerd[1496]: time="2024-12-13T05:55:22.081679258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 05:55:22.082688 containerd[1496]: time="2024-12-13T05:55:22.082633241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:55:22.083817 containerd[1496]: time="2024-12-13T05:55:22.083771677Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:55:22.085694 containerd[1496]: time="2024-12-13T05:55:22.085631821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:55:22.085694 containerd[1496]: time="2024-12-13T05:55:22.086052052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:55:22.088869 containerd[1496]: time="2024-12-13T05:55:22.088833164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:55:22.093084 containerd[1496]: time="2024-12-13T05:55:22.093040703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 700.551387ms" Dec 13 05:55:22.096933 containerd[1496]: time="2024-12-13T05:55:22.096775152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.091233ms" Dec 13 05:55:22.097892 containerd[1496]: time="2024-12-13T05:55:22.097832861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 688.64854ms" Dec 13 05:55:22.102970 kubelet[2282]: W1213 05:55:22.102924 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.15.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:22.103155 kubelet[2282]: E1213 05:55:22.102982 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.15.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:22.168955 kubelet[2282]: E1213 05:55:22.168718 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.170:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.170:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-kh3sk.gb1.brightbox.com.1810a6dc1f91bac6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-kh3sk.gb1.brightbox.com,UID:srv-kh3sk.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-kh3sk.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:55:20.890301126 +0000 UTC m=+0.733207900,LastTimestamp:2024-12-13 05:55:20.890301126 +0000 UTC m=+0.733207900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-kh3sk.gb1.brightbox.com,}" Dec 13 05:55:22.250846 kubelet[2282]: W1213 05:55:22.250676 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.15.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.15.170:6443: connect: connection refused Dec 13 05:55:22.250846 kubelet[2282]: E1213 05:55:22.250785 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.15.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:22.325931 containerd[1496]: 
time="2024-12-13T05:55:22.324047134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:55:22.325931 containerd[1496]: time="2024-12-13T05:55:22.324294714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:55:22.327053 kubelet[2282]: E1213 05:55:22.326708 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-kh3sk.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.170:6443: connect: connection refused" interval="1.6s" Dec 13 05:55:22.328146 containerd[1496]: time="2024-12-13T05:55:22.326823085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.328146 containerd[1496]: time="2024-12-13T05:55:22.326983267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.351877 containerd[1496]: time="2024-12-13T05:55:22.351771176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:55:22.352297 containerd[1496]: time="2024-12-13T05:55:22.352214721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:55:22.354597 containerd[1496]: time="2024-12-13T05:55:22.354457523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:55:22.355737 containerd[1496]: time="2024-12-13T05:55:22.355611323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.355825 containerd[1496]: time="2024-12-13T05:55:22.355789939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.357508 containerd[1496]: time="2024-12-13T05:55:22.357282403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:55:22.357508 containerd[1496]: time="2024-12-13T05:55:22.357309407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.357508 containerd[1496]: time="2024-12-13T05:55:22.357399419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:22.371333 systemd[1]: Started cri-containerd-cd2505d342d0bec327f6122ab713f9d8a278af6fd74e96c737cabd525be16874.scope - libcontainer container cd2505d342d0bec327f6122ab713f9d8a278af6fd74e96c737cabd525be16874. Dec 13 05:55:22.403386 systemd[1]: Started cri-containerd-505f9e24fcd07a697011bef9bdcefed52ddc27ab7c4f0483af319235f04f8560.scope - libcontainer container 505f9e24fcd07a697011bef9bdcefed52ddc27ab7c4f0483af319235f04f8560. Dec 13 05:55:22.416508 systemd[1]: Started cri-containerd-6d1b2f9550cfba0930e66817758c99f8c342ea91bf99ae107a81bea96d9fc379.scope - libcontainer container 6d1b2f9550cfba0930e66817758c99f8c342ea91bf99ae107a81bea96d9fc379. 
Dec 13 05:55:22.484263 containerd[1496]: time="2024-12-13T05:55:22.484176504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-kh3sk.gb1.brightbox.com,Uid:c112ee210bf86c0589cd943a2e5d8132,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd2505d342d0bec327f6122ab713f9d8a278af6fd74e96c737cabd525be16874\"" Dec 13 05:55:22.494773 containerd[1496]: time="2024-12-13T05:55:22.494737404Z" level=info msg="CreateContainer within sandbox \"cd2505d342d0bec327f6122ab713f9d8a278af6fd74e96c737cabd525be16874\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 05:55:22.519747 containerd[1496]: time="2024-12-13T05:55:22.519322349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-kh3sk.gb1.brightbox.com,Uid:e0e1d01dc1fe04bc69bee025ab5859e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"505f9e24fcd07a697011bef9bdcefed52ddc27ab7c4f0483af319235f04f8560\"" Dec 13 05:55:22.525554 containerd[1496]: time="2024-12-13T05:55:22.525348078Z" level=info msg="CreateContainer within sandbox \"505f9e24fcd07a697011bef9bdcefed52ddc27ab7c4f0483af319235f04f8560\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 05:55:22.530370 containerd[1496]: time="2024-12-13T05:55:22.530104662Z" level=info msg="CreateContainer within sandbox \"cd2505d342d0bec327f6122ab713f9d8a278af6fd74e96c737cabd525be16874\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"520e630a3c9ba57c2c7c5b78ddc034c314ed6762d85cab53c011df14e53d8fc5\"" Dec 13 05:55:22.540298 containerd[1496]: time="2024-12-13T05:55:22.540240209Z" level=info msg="StartContainer for \"520e630a3c9ba57c2c7c5b78ddc034c314ed6762d85cab53c011df14e53d8fc5\"" Dec 13 05:55:22.541617 containerd[1496]: time="2024-12-13T05:55:22.541538443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-kh3sk.gb1.brightbox.com,Uid:089f75e9e942cf54548bd80d554b2c93,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d1b2f9550cfba0930e66817758c99f8c342ea91bf99ae107a81bea96d9fc379\"" Dec 13 05:55:22.542140 kubelet[2282]: I1213 05:55:22.542101 2282 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:22.542851 kubelet[2282]: E1213 05:55:22.542786 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.15.170:6443/api/v1/nodes\": dial tcp 10.230.15.170:6443: connect: connection refused" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:22.543486 containerd[1496]: time="2024-12-13T05:55:22.543373771Z" level=info msg="CreateContainer within sandbox \"505f9e24fcd07a697011bef9bdcefed52ddc27ab7c4f0483af319235f04f8560\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a036144b03c623b99c98e4a757b56a908314327f98ef7394796a22d8136db56\"" Dec 13 05:55:22.546173 containerd[1496]: time="2024-12-13T05:55:22.545529061Z" level=info msg="StartContainer for \"6a036144b03c623b99c98e4a757b56a908314327f98ef7394796a22d8136db56\"" Dec 13 05:55:22.556085 containerd[1496]: time="2024-12-13T05:55:22.556017427Z" level=info msg="CreateContainer within sandbox \"6d1b2f9550cfba0930e66817758c99f8c342ea91bf99ae107a81bea96d9fc379\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 05:55:22.577673 containerd[1496]: time="2024-12-13T05:55:22.577536640Z" level=info msg="CreateContainer within sandbox \"6d1b2f9550cfba0930e66817758c99f8c342ea91bf99ae107a81bea96d9fc379\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7a853c24c907ce6ee0d8959f23bd05bfd21b6fb81e48ebc3cd7b267a1c0e12e\"" Dec 13 05:55:22.581521 containerd[1496]: time="2024-12-13T05:55:22.581492386Z" level=info msg="StartContainer for \"b7a853c24c907ce6ee0d8959f23bd05bfd21b6fb81e48ebc3cd7b267a1c0e12e\"" Dec 13 05:55:22.604311 systemd[1]: Started cri-containerd-520e630a3c9ba57c2c7c5b78ddc034c314ed6762d85cab53c011df14e53d8fc5.scope - libcontainer container 520e630a3c9ba57c2c7c5b78ddc034c314ed6762d85cab53c011df14e53d8fc5. Dec 13 05:55:22.631663 systemd[1]: Started cri-containerd-6a036144b03c623b99c98e4a757b56a908314327f98ef7394796a22d8136db56.scope - libcontainer container 6a036144b03c623b99c98e4a757b56a908314327f98ef7394796a22d8136db56. Dec 13 05:55:22.663318 systemd[1]: Started cri-containerd-b7a853c24c907ce6ee0d8959f23bd05bfd21b6fb81e48ebc3cd7b267a1c0e12e.scope - libcontainer container b7a853c24c907ce6ee0d8959f23bd05bfd21b6fb81e48ebc3cd7b267a1c0e12e. Dec 13 05:55:22.733828 containerd[1496]: time="2024-12-13T05:55:22.733492835Z" level=info msg="StartContainer for \"6a036144b03c623b99c98e4a757b56a908314327f98ef7394796a22d8136db56\" returns successfully" Dec 13 05:55:22.745710 containerd[1496]: time="2024-12-13T05:55:22.745638090Z" level=info msg="StartContainer for \"520e630a3c9ba57c2c7c5b78ddc034c314ed6762d85cab53c011df14e53d8fc5\" returns successfully" Dec 13 05:55:22.762332 containerd[1496]: time="2024-12-13T05:55:22.762192968Z" level=info msg="StartContainer for \"b7a853c24c907ce6ee0d8959f23bd05bfd21b6fb81e48ebc3cd7b267a1c0e12e\" returns successfully" Dec 13 05:55:22.902244 kubelet[2282]: E1213 05:55:22.901056 2282 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.15.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.170:6443: connect: connection refused" logger="UnhandledError" Dec 13 05:55:24.150818 kubelet[2282]: I1213 05:55:24.149405 2282 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:25.320041 kubelet[2282]: E1213 05:55:25.319969 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-kh3sk.gb1.brightbox.com\" not found" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:25.417172 kubelet[2282]: I1213 05:55:25.416608 2282 kubelet_node_status.go:75] "Successfully registered node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:25.888150 kubelet[2282]: I1213 05:55:25.886564 2282 apiserver.go:52] "Watching apiserver" Dec 13 05:55:25.921304 kubelet[2282]: I1213 05:55:25.921248 2282 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 05:55:26.009385 kubelet[2282]: E1213 05:55:26.008885 2282 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:27.658703 systemd[1]: Reloading requested from client PID 2560 ('systemctl') (unit session-11.scope)... Dec 13 05:55:27.658733 systemd[1]: Reloading... Dec 13 05:55:27.741157 zram_generator::config[2595]: No configuration found. 
Dec 13 05:55:27.951987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:55:28.080799 systemd[1]: Reloading finished in 421 ms. Dec 13 05:55:28.145190 kubelet[2282]: I1213 05:55:28.144624 2282 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:55:28.144676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:28.156899 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:55:28.157332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:28.157440 systemd[1]: kubelet.service: Consumed 1.210s CPU time, 115.5M memory peak, 0B memory swap peak. Dec 13 05:55:28.167598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:55:28.332403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:55:28.343604 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:55:28.436845 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:55:28.436845 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:55:28.436845 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:55:28.437756 kubelet[2663]: I1213 05:55:28.436947 2663 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:55:28.450102 kubelet[2663]: I1213 05:55:28.450032 2663 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 05:55:28.450102 kubelet[2663]: I1213 05:55:28.450070 2663 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:55:28.450426 kubelet[2663]: I1213 05:55:28.450393 2663 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 05:55:28.454111 kubelet[2663]: I1213 05:55:28.454052 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 05:55:28.457190 kubelet[2663]: I1213 05:55:28.457089 2663 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:55:28.463877 kubelet[2663]: E1213 05:55:28.463846 2663 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 05:55:28.464004 kubelet[2663]: I1213 05:55:28.463892 2663 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 05:55:28.468412 kubelet[2663]: I1213 05:55:28.468387 2663 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 05:55:28.470216 kubelet[2663]: I1213 05:55:28.468749 2663 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 05:55:28.470216 kubelet[2663]: I1213 05:55:28.468990 2663 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:55:28.470216 kubelet[2663]: I1213 05:55:28.469036 2663 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-kh3sk.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 05:55:28.470216 kubelet[2663]: I1213 05:55:28.469471 2663 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469489 2663 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469534 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469691 2663 kubelet.go:408] "Attempting to sync node with API server" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469710 2663 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469743 2663 kubelet.go:314] "Adding apiserver pod source" Dec 13 05:55:28.470575 kubelet[2663]: I1213 05:55:28.469758 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:55:28.474244 kubelet[2663]: I1213 05:55:28.474208 2663 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:55:28.476186 kubelet[2663]: I1213 05:55:28.475006 2663 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:55:28.481202 kubelet[2663]: I1213 05:55:28.481183 2663 server.go:1269] "Started kubelet" Dec 13 05:55:28.491068 kubelet[2663]: I1213 05:55:28.489586 2663 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:55:28.493207 
Dec 13 05:55:28.493484 kubelet[2663]: I1213 05:55:28.493459 2663 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 05:55:28.495200 kubelet[2663]: I1213 05:55:28.495126 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 05:55:28.495513 kubelet[2663]: I1213 05:55:28.495486 2663 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 05:55:28.495660 kubelet[2663]: I1213 05:55:28.490253 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 05:55:28.502735 kubelet[2663]: I1213 05:55:28.502705 2663 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 05:55:28.502959 kubelet[2663]: I1213 05:55:28.502909 2663 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 05:55:28.503418 kubelet[2663]: I1213 05:55:28.503397 2663 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 05:55:28.506531 kubelet[2663]: I1213 05:55:28.506509 2663 factory.go:221] Registration of the systemd container factory successfully
Dec 13 05:55:28.506857 kubelet[2663]: I1213 05:55:28.506809 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 05:55:28.509523 kubelet[2663]: E1213 05:55:28.509473 2663 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 05:55:28.510027 kubelet[2663]: I1213 05:55:28.510003 2663 factory.go:221] Registration of the containerd container factory successfully
Dec 13 05:55:28.520840 kubelet[2663]: I1213 05:55:28.520784 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 05:55:28.522314 kubelet[2663]: I1213 05:55:28.522271 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
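
The ratelimit.go line above configures the podresources endpoint with qps=100 and burstTokens=10. That combination is the classic token-bucket shape; a toy version in plain Go (illustrative only, kubelet uses a proper limiter library rather than this hand-rolled sketch):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        tokens := make(chan struct{}, 10) // burst of 10 tokens
        go func() {
            for range time.Tick(time.Second / 100) { // refill at 100 qps
                select {
                case tokens <- struct{}{}:
                default: // bucket full: surplus tokens are dropped
                }
            }
        }()

        <-tokens // a request must acquire a token before being served
        fmt.Println("request admitted")
    }
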
protocol="IPv6" Dec 13 05:55:28.522314 kubelet[2663]: I1213 05:55:28.522318 2663 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:55:28.522452 kubelet[2663]: I1213 05:55:28.522344 2663 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 05:55:28.522452 kubelet[2663]: E1213 05:55:28.522417 2663 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:55:28.585311 kubelet[2663]: I1213 05:55:28.585175 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:55:28.586577 kubelet[2663]: I1213 05:55:28.585200 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:55:28.586577 kubelet[2663]: I1213 05:55:28.586563 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:55:28.586795 kubelet[2663]: I1213 05:55:28.586760 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 05:55:28.586858 kubelet[2663]: I1213 05:55:28.586795 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 05:55:28.586858 kubelet[2663]: I1213 05:55:28.586824 2663 policy_none.go:49] "None policy: Start" Dec 13 05:55:28.588261 kubelet[2663]: I1213 05:55:28.588239 2663 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:55:28.589103 kubelet[2663]: I1213 05:55:28.588454 2663 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:55:28.589103 kubelet[2663]: I1213 05:55:28.588665 2663 state_mem.go:75] "Updated machine memory state" Dec 13 05:55:28.595746 kubelet[2663]: I1213 05:55:28.594989 2663 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:55:28.595746 kubelet[2663]: I1213 05:55:28.595234 2663 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 05:55:28.595746 kubelet[2663]: I1213 05:55:28.595252 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 05:55:28.595918 kubelet[2663]: I1213 05:55:28.595876 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:55:28.634929 kubelet[2663]: W1213 05:55:28.633778 2663 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:55:28.635565 kubelet[2663]: W1213 05:55:28.635534 2663 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:55:28.635661 kubelet[2663]: W1213 05:55:28.635641 2663 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:55:28.704861 kubelet[2663]: I1213 05:55:28.704778 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-k8s-certs\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705043 kubelet[2663]: I1213 05:55:28.704863 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-flexvolume-dir\") pod 
\"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705043 kubelet[2663]: I1213 05:55:28.704904 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-kubeconfig\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705043 kubelet[2663]: I1213 05:55:28.704934 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705043 kubelet[2663]: I1213 05:55:28.704983 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c112ee210bf86c0589cd943a2e5d8132-kubeconfig\") pod \"kube-scheduler-srv-kh3sk.gb1.brightbox.com\" (UID: \"c112ee210bf86c0589cd943a2e5d8132\") " pod="kube-system/kube-scheduler-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705043 kubelet[2663]: I1213 05:55:28.705013 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-ca-certs\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705299 kubelet[2663]: I1213 05:55:28.705043 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0e1d01dc1fe04bc69bee025ab5859e5-usr-share-ca-certificates\") pod \"kube-apiserver-srv-kh3sk.gb1.brightbox.com\" (UID: \"e0e1d01dc1fe04bc69bee025ab5859e5\") " pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705299 kubelet[2663]: I1213 05:55:28.705069 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-ca-certs\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.705299 kubelet[2663]: I1213 05:55:28.705111 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/089f75e9e942cf54548bd80d554b2c93-k8s-certs\") pod \"kube-controller-manager-srv-kh3sk.gb1.brightbox.com\" (UID: \"089f75e9e942cf54548bd80d554b2c93\") " pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.708146 kubelet[2663]: I1213 05:55:28.708065 2663 kubelet_node_status.go:72] "Attempting to register node" node="srv-kh3sk.gb1.brightbox.com" Dec 13 05:55:28.717199 kubelet[2663]: I1213 05:55:28.716893 2663 kubelet_node_status.go:111] "Node was previously registered" node="srv-kh3sk.gb1.brightbox.com" Dec 13 
Dec 13 05:55:29.473226 kubelet[2663]: I1213 05:55:29.473180 2663 apiserver.go:52] "Watching apiserver"
Dec 13 05:55:29.503348 kubelet[2663]: I1213 05:55:29.503289 2663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 05:55:29.597268 kubelet[2663]: I1213 05:55:29.597176 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-kh3sk.gb1.brightbox.com" podStartSLOduration=1.597144433 podStartE2EDuration="1.597144433s" podCreationTimestamp="2024-12-13 05:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:55:29.595812309 +0000 UTC m=+1.223096817" watchObservedRunningTime="2024-12-13 05:55:29.597144433 +0000 UTC m=+1.224428955"
Dec 13 05:55:29.612190 kubelet[2663]: I1213 05:55:29.611661 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-kh3sk.gb1.brightbox.com" podStartSLOduration=1.611643569 podStartE2EDuration="1.611643569s" podCreationTimestamp="2024-12-13 05:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:55:29.610166544 +0000 UTC m=+1.237451087" watchObservedRunningTime="2024-12-13 05:55:29.611643569 +0000 UTC m=+1.238928090"
Dec 13 05:55:29.643543 kubelet[2663]: I1213 05:55:29.643459 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-kh3sk.gb1.brightbox.com" podStartSLOduration=1.643439131 podStartE2EDuration="1.643439131s" podCreationTimestamp="2024-12-13 05:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:55:29.621895718 +0000 UTC m=+1.249180244" watchObservedRunningTime="2024-12-13 05:55:29.643439131 +0000 UTC m=+1.270723650"
Dec 13 05:55:33.818364 kubelet[2663]: I1213 05:55:33.818064 2663 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 05:55:33.819960 containerd[1496]: time="2024-12-13T05:55:33.819795912Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 05:55:33.822104 kubelet[2663]: I1213 05:55:33.820174 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 05:55:33.979709 systemd[1]: Created slice kubepods-besteffort-pod2829b518_65b9_4b84_9fb8_0cbbe2499d1f.slice - libcontainer container kubepods-besteffort-pod2829b518_65b9_4b84_9fb8_0cbbe2499d1f.slice.
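
The podStartSLOduration figures above are straightforward timestamp arithmetic: observedRunningTime minus podCreationTimestamp (with no image pull in between, firstStartedPulling/lastFinishedPulling stay at the zero time). Reproducing the kube-scheduler number from the logged values:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        // Values copied from the pod_startup_latency_tracker line above;
        // parse errors are ignored only because the inputs are fixed.
        created, _ := time.Parse(layout, "2024-12-13 05:55:28 +0000 UTC")
        running, _ := time.Parse(layout, "2024-12-13 05:55:29.597144433 +0000 UTC")
        fmt.Println(running.Sub(created)) // 1.597144433s, matching podStartSLOduration
    }
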
Dec 13 05:55:34.143810 kubelet[2663]: I1213 05:55:34.143425 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-kube-proxy\") pod \"kube-proxy-j2lx7\" (UID: \"2829b518-65b9-4b84-9fb8-0cbbe2499d1f\") " pod="kube-system/kube-proxy-j2lx7"
Dec 13 05:55:34.143810 kubelet[2663]: I1213 05:55:34.143480 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p48r\" (UniqueName: \"kubernetes.io/projected/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-kube-api-access-9p48r\") pod \"kube-proxy-j2lx7\" (UID: \"2829b518-65b9-4b84-9fb8-0cbbe2499d1f\") " pod="kube-system/kube-proxy-j2lx7"
Dec 13 05:55:34.143810 kubelet[2663]: I1213 05:55:34.143543 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-xtables-lock\") pod \"kube-proxy-j2lx7\" (UID: \"2829b518-65b9-4b84-9fb8-0cbbe2499d1f\") " pod="kube-system/kube-proxy-j2lx7"
Dec 13 05:55:34.143810 kubelet[2663]: I1213 05:55:34.143571 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-lib-modules\") pod \"kube-proxy-j2lx7\" (UID: \"2829b518-65b9-4b84-9fb8-0cbbe2499d1f\") " pod="kube-system/kube-proxy-j2lx7"
Dec 13 05:55:34.255674 kubelet[2663]: E1213 05:55:34.255271 2663 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 05:55:34.255674 kubelet[2663]: E1213 05:55:34.255315 2663 projected.go:194] Error preparing data for projected volume kube-api-access-9p48r for pod kube-system/kube-proxy-j2lx7: configmap "kube-root-ca.crt" not found
Dec 13 05:55:34.255674 kubelet[2663]: E1213 05:55:34.255401 2663 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-kube-api-access-9p48r podName:2829b518-65b9-4b84-9fb8-0cbbe2499d1f nodeName:}" failed. No retries permitted until 2024-12-13 05:55:34.75537333 +0000 UTC m=+6.382657835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9p48r" (UniqueName: "kubernetes.io/projected/2829b518-65b9-4b84-9fb8-0cbbe2499d1f-kube-api-access-9p48r") pod "kube-proxy-j2lx7" (UID: "2829b518-65b9-4b84-9fb8-0cbbe2499d1f") : configmap "kube-root-ca.crt" not found
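
The failed mount above is parked for durationBeforeRetry 500ms rather than retried immediately; kubelet grows this delay on repeated failures of the same operation. A generic sketch of that exponential-backoff shape, where the 500ms starting point comes from the log but the doubling factor and cap are assumptions, not kubelet's exact constants:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // initial delay, as logged
        maxDelay := 2 * time.Minute     // assumed cap for the sketch
        for i := 0; i < 5; i++ {
            fmt.Println("no retries permitted for", delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
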
Dec 13 05:55:34.865654 sudo[1770]: pam_unix(sudo:session): session closed for user root
Dec 13 05:55:34.897980 containerd[1496]: time="2024-12-13T05:55:34.897422532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2lx7,Uid:2829b518-65b9-4b84-9fb8-0cbbe2499d1f,Namespace:kube-system,Attempt:0,}"
Dec 13 05:55:34.907681 kubelet[2663]: W1213 05:55:34.907437 2663 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-kh3sk.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object
Dec 13 05:55:34.907681 kubelet[2663]: E1213 05:55:34.907508 2663 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-kh3sk.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object" logger="UnhandledError"
Dec 13 05:55:34.907681 kubelet[2663]: W1213 05:55:34.907603 2663 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-kh3sk.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object
Dec 13 05:55:34.907681 kubelet[2663]: E1213 05:55:34.907636 2663 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-kh3sk.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object" logger="UnhandledError"
Dec 13 05:55:34.911109 systemd[1]: Created slice kubepods-besteffort-poda3871426_d1dc_402e_8d28_0938fcf7e237.slice - libcontainer container kubepods-besteffort-poda3871426_d1dc_402e_8d28_0938fcf7e237.slice.
Dec 13 05:55:34.952886 containerd[1496]: time="2024-12-13T05:55:34.952669379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:55:34.952886 containerd[1496]: time="2024-12-13T05:55:34.952755657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:55:34.952886 containerd[1496]: time="2024-12-13T05:55:34.952781431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:34.953301 containerd[1496]: time="2024-12-13T05:55:34.952943491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:34.991464 systemd[1]: Started cri-containerd-38793fe5fa2834e61e6e543fd0cad4f8fdabd5a6f5fc4953c4c386e1f2f6b88d.scope - libcontainer container 38793fe5fa2834e61e6e543fd0cad4f8fdabd5a6f5fc4953c4c386e1f2f6b88d.
Dec 13 05:55:35.022749 sshd[1767]: pam_unix(sshd:session): session closed for user core
Dec 13 05:55:35.029726 containerd[1496]: time="2024-12-13T05:55:35.029604041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2lx7,Uid:2829b518-65b9-4b84-9fb8-0cbbe2499d1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"38793fe5fa2834e61e6e543fd0cad4f8fdabd5a6f5fc4953c4c386e1f2f6b88d\""
Dec 13 05:55:35.031791 systemd[1]: sshd@8-10.230.15.170:22-147.75.109.163:47718.service: Deactivated successfully.
Dec 13 05:55:35.035206 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 05:55:35.035841 systemd[1]: session-11.scope: Consumed 6.454s CPU time, 142.5M memory peak, 0B memory swap peak.
Dec 13 05:55:35.037769 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit.
Dec 13 05:55:35.040419 containerd[1496]: time="2024-12-13T05:55:35.039583193Z" level=info msg="CreateContainer within sandbox \"38793fe5fa2834e61e6e543fd0cad4f8fdabd5a6f5fc4953c4c386e1f2f6b88d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 05:55:35.041612 systemd-logind[1482]: Removed session 11.
Dec 13 05:55:35.049505 kubelet[2663]: I1213 05:55:35.049284 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3871426-d1dc-402e-8d28-0938fcf7e237-var-lib-calico\") pod \"tigera-operator-76c4976dd7-dgpmn\" (UID: \"a3871426-d1dc-402e-8d28-0938fcf7e237\") " pod="tigera-operator/tigera-operator-76c4976dd7-dgpmn"
Dec 13 05:55:35.049505 kubelet[2663]: I1213 05:55:35.049338 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bh5v\" (UniqueName: \"kubernetes.io/projected/a3871426-d1dc-402e-8d28-0938fcf7e237-kube-api-access-8bh5v\") pod \"tigera-operator-76c4976dd7-dgpmn\" (UID: \"a3871426-d1dc-402e-8d28-0938fcf7e237\") " pod="tigera-operator/tigera-operator-76c4976dd7-dgpmn"
Dec 13 05:55:35.062973 containerd[1496]: time="2024-12-13T05:55:35.062895166Z" level=info msg="CreateContainer within sandbox \"38793fe5fa2834e61e6e543fd0cad4f8fdabd5a6f5fc4953c4c386e1f2f6b88d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c76008c116ea1c2b8fb9e06ad82f2be8f694dd21a9b709b052304d256c2ba992\""
Dec 13 05:55:35.065553 containerd[1496]: time="2024-12-13T05:55:35.065478733Z" level=info msg="StartContainer for \"c76008c116ea1c2b8fb9e06ad82f2be8f694dd21a9b709b052304d256c2ba992\""
Dec 13 05:55:35.107332 systemd[1]: Started cri-containerd-c76008c116ea1c2b8fb9e06ad82f2be8f694dd21a9b709b052304d256c2ba992.scope - libcontainer container c76008c116ea1c2b8fb9e06ad82f2be8f694dd21a9b709b052304d256c2ba992.
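
The containerd lines above follow the CRI ordering for bringing a pod up: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer launches the result. A schematic Go interface capturing just that ordering (simplified; the real runtime.v1 API exchanges full request/response messages rather than bare strings):

    package main

    import "fmt"

    // Runtime is a schematic shape of the three CRI calls seen in the log.
    type Runtime interface {
        RunPodSandbox(metadata string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // fakeRuntime satisfies Runtime just to show the call order end to end.
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(string) (string, error)           { return "sandbox-id", nil }
    func (fakeRuntime) CreateContainer(string, string) (string, error) { return "container-id", nil }
    func (fakeRuntime) StartContainer(string) error                    { return nil }

    func main() {
        var rt Runtime = fakeRuntime{}
        sb, _ := rt.RunPodSandbox("kube-proxy-j2lx7")
        ctr, _ := rt.CreateContainer(sb, "kube-proxy")
        fmt.Println("started:", ctr, rt.StartContainer(ctr))
    }
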
Dec 13 05:55:35.153026 containerd[1496]: time="2024-12-13T05:55:35.152770465Z" level=info msg="StartContainer for \"c76008c116ea1c2b8fb9e06ad82f2be8f694dd21a9b709b052304d256c2ba992\" returns successfully"
Dec 13 05:55:35.609573 kubelet[2663]: I1213 05:55:35.609367 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2lx7" podStartSLOduration=2.609345916 podStartE2EDuration="2.609345916s" podCreationTimestamp="2024-12-13 05:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:55:35.607088666 +0000 UTC m=+7.234373195" watchObservedRunningTime="2024-12-13 05:55:35.609345916 +0000 UTC m=+7.236630441"
Dec 13 05:55:36.436613 containerd[1496]: time="2024-12-13T05:55:36.436557326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-dgpmn,Uid:a3871426-d1dc-402e-8d28-0938fcf7e237,Namespace:tigera-operator,Attempt:0,}"
Dec 13 05:55:36.475374 containerd[1496]: time="2024-12-13T05:55:36.475096720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:55:36.476577 containerd[1496]: time="2024-12-13T05:55:36.476374185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:55:36.476577 containerd[1496]: time="2024-12-13T05:55:36.476451457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:36.477059 containerd[1496]: time="2024-12-13T05:55:36.476864389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:36.511408 systemd[1]: Started cri-containerd-a93086696865290d8ab715ad33b3c973052a52fb141918df8c205e0efff453c9.scope - libcontainer container a93086696865290d8ab715ad33b3c973052a52fb141918df8c205e0efff453c9.
Dec 13 05:55:36.571506 containerd[1496]: time="2024-12-13T05:55:36.571446705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-dgpmn,Uid:a3871426-d1dc-402e-8d28-0938fcf7e237,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a93086696865290d8ab715ad33b3c973052a52fb141918df8c205e0efff453c9\""
Dec 13 05:55:36.574734 containerd[1496]: time="2024-12-13T05:55:36.574702130Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 05:55:38.857670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615145211.mount: Deactivated successfully.
Dec 13 05:55:39.637796 containerd[1496]: time="2024-12-13T05:55:39.637740441Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:39.639204 containerd[1496]: time="2024-12-13T05:55:39.639124628Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301"
Dec 13 05:55:39.640240 containerd[1496]: time="2024-12-13T05:55:39.640108401Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:39.643228 containerd[1496]: time="2024-12-13T05:55:39.643194647Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:39.644744 containerd[1496]: time="2024-12-13T05:55:39.644518500Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.069771047s"
Dec 13 05:55:39.644744 containerd[1496]: time="2024-12-13T05:55:39.644574733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 05:55:39.648771 containerd[1496]: time="2024-12-13T05:55:39.648576197Z" level=info msg="CreateContainer within sandbox \"a93086696865290d8ab715ad33b3c973052a52fb141918df8c205e0efff453c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 05:55:39.662020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826330139.mount: Deactivated successfully.
Dec 13 05:55:39.665598 containerd[1496]: time="2024-12-13T05:55:39.665495561Z" level=info msg="CreateContainer within sandbox \"a93086696865290d8ab715ad33b3c973052a52fb141918df8c205e0efff453c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63bcb5f117dcf1671f6d04b4a60e4ce667edbbbf2a8b1e50b4b5ac59c0a7a016\""
Dec 13 05:55:39.667875 containerd[1496]: time="2024-12-13T05:55:39.667189328Z" level=info msg="StartContainer for \"63bcb5f117dcf1671f6d04b4a60e4ce667edbbbf2a8b1e50b4b5ac59c0a7a016\""
Dec 13 05:55:39.709320 systemd[1]: Started cri-containerd-63bcb5f117dcf1671f6d04b4a60e4ce667edbbbf2a8b1e50b4b5ac59c0a7a016.scope - libcontainer container 63bcb5f117dcf1671f6d04b4a60e4ce667edbbbf2a8b1e50b4b5ac59c0a7a016.
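
From the pull above one can back out an effective transfer rate: 21764301 bytes read over a 3.069771047s pull is roughly 6.8 MiB/s. As a one-liner with the values lifted straight from the log:

    package main

    import "fmt"

    func main() {
        const bytesRead = 21764301  // from "active requests=0, bytes read=21764301"
        const seconds = 3.069771047 // from "in 3.069771047s"
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20))
    }
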
Dec 13 05:55:39.744075 containerd[1496]: time="2024-12-13T05:55:39.744004382Z" level=info msg="StartContainer for \"63bcb5f117dcf1671f6d04b4a60e4ce667edbbbf2a8b1e50b4b5ac59c0a7a016\" returns successfully"
Dec 13 05:55:41.492529 kubelet[2663]: I1213 05:55:41.492463 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-dgpmn" podStartSLOduration=4.419800617 podStartE2EDuration="7.492444162s" podCreationTimestamp="2024-12-13 05:55:34 +0000 UTC" firstStartedPulling="2024-12-13 05:55:36.573376987 +0000 UTC m=+8.200661498" lastFinishedPulling="2024-12-13 05:55:39.646020525 +0000 UTC m=+11.273305043" observedRunningTime="2024-12-13 05:55:40.623787007 +0000 UTC m=+12.251071542" watchObservedRunningTime="2024-12-13 05:55:41.492444162 +0000 UTC m=+13.119728681"
Dec 13 05:55:43.074857 systemd[1]: Created slice kubepods-besteffort-pod5b9c8704_c260_47f8_a76e_c75efc5eb2ab.slice - libcontainer container kubepods-besteffort-pod5b9c8704_c260_47f8_a76e_c75efc5eb2ab.slice.
Dec 13 05:55:43.102726 kubelet[2663]: I1213 05:55:43.102452 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b9c8704-c260-47f8-a76e-c75efc5eb2ab-typha-certs\") pod \"calico-typha-765c758d75-fx5x2\" (UID: \"5b9c8704-c260-47f8-a76e-c75efc5eb2ab\") " pod="calico-system/calico-typha-765c758d75-fx5x2"
Dec 13 05:55:43.102726 kubelet[2663]: I1213 05:55:43.102535 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5b2h\" (UniqueName: \"kubernetes.io/projected/5b9c8704-c260-47f8-a76e-c75efc5eb2ab-kube-api-access-k5b2h\") pod \"calico-typha-765c758d75-fx5x2\" (UID: \"5b9c8704-c260-47f8-a76e-c75efc5eb2ab\") " pod="calico-system/calico-typha-765c758d75-fx5x2"
Dec 13 05:55:43.102726 kubelet[2663]: I1213 05:55:43.102625 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9c8704-c260-47f8-a76e-c75efc5eb2ab-tigera-ca-bundle\") pod \"calico-typha-765c758d75-fx5x2\" (UID: \"5b9c8704-c260-47f8-a76e-c75efc5eb2ab\") " pod="calico-system/calico-typha-765c758d75-fx5x2"
Dec 13 05:55:43.193321 systemd[1]: Created slice kubepods-besteffort-pode8930a23_4cfe_4264_80f5_2611a2deca14.slice - libcontainer container kubepods-besteffort-pode8930a23_4cfe_4264_80f5_2611a2deca14.slice.
Dec 13 05:55:43.203017 kubelet[2663]: I1213 05:55:43.202968 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-cni-bin-dir\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.203230 kubelet[2663]: I1213 05:55:43.203068 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-lib-modules\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.203981 kubelet[2663]: I1213 05:55:43.203935 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8930a23-4cfe-4264-80f5-2611a2deca14-node-certs\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.204072 kubelet[2663]: I1213 05:55:43.204021 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-policysync\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205160 kubelet[2663]: I1213 05:55:43.204109 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-cni-log-dir\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205160 kubelet[2663]: I1213 05:55:43.204204 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-var-run-calico\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205160 kubelet[2663]: I1213 05:55:43.204283 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-cni-net-dir\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205160 kubelet[2663]: I1213 05:55:43.204373 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-xtables-lock\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205160 kubelet[2663]: I1213 05:55:43.204483 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-flexvol-driver-host\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205406 kubelet[2663]: I1213 05:55:43.204587 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6frrv\" (UniqueName: \"kubernetes.io/projected/e8930a23-4cfe-4264-80f5-2611a2deca14-kube-api-access-6frrv\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205406 kubelet[2663]: I1213 05:55:43.204746 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8930a23-4cfe-4264-80f5-2611a2deca14-var-lib-calico\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.205406 kubelet[2663]: I1213 05:55:43.204793 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8930a23-4cfe-4264-80f5-2611a2deca14-tigera-ca-bundle\") pod \"calico-node-qt95j\" (UID: \"e8930a23-4cfe-4264-80f5-2611a2deca14\") " pod="calico-system/calico-node-qt95j"
Dec 13 05:55:43.318013 kubelet[2663]: E1213 05:55:43.317961 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.318013 kubelet[2663]: W1213 05:55:43.317994 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.318285 kubelet[2663]: E1213 05:55:43.318043 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.342467 kubelet[2663]: E1213 05:55:43.342282 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.342467 kubelet[2663]: W1213 05:55:43.342309 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.342467 kubelet[2663]: E1213 05:55:43.342341 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.363937 kubelet[2663]: E1213 05:55:43.363427 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c"
Dec 13 05:55:43.383976 containerd[1496]: time="2024-12-13T05:55:43.383903185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-765c758d75-fx5x2,Uid:5b9c8704-c260-47f8-a76e-c75efc5eb2ab,Namespace:calico-system,Attempt:0,}"
Dec 13 05:55:43.406990 kubelet[2663]: E1213 05:55:43.406945 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.406990 kubelet[2663]: W1213 05:55:43.406985 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.409296 kubelet[2663]: E1213 05:55:43.407012 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.409296 kubelet[2663]: E1213 05:55:43.407416 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.409296 kubelet[2663]: W1213 05:55:43.407430 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.409296 kubelet[2663]: E1213 05:55:43.407457 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.410242 kubelet[2663]: E1213 05:55:43.410219 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.410242 kubelet[2663]: W1213 05:55:43.410240 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.410460 kubelet[2663]: E1213 05:55:43.410258 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.410612 kubelet[2663]: E1213 05:55:43.410559 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.410612 kubelet[2663]: W1213 05:55:43.410573 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.410612 kubelet[2663]: E1213 05:55:43.410586 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.411286 kubelet[2663]: E1213 05:55:43.411259 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.411286 kubelet[2663]: W1213 05:55:43.411283 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.411417 kubelet[2663]: E1213 05:55:43.411301 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.412013 kubelet[2663]: E1213 05:55:43.411977 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.412175 kubelet[2663]: W1213 05:55:43.411999 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.412175 kubelet[2663]: E1213 05:55:43.412139 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.413176 kubelet[2663]: E1213 05:55:43.412714 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.413176 kubelet[2663]: W1213 05:55:43.412733 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.413176 kubelet[2663]: E1213 05:55:43.412748 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.413479 kubelet[2663]: E1213 05:55:43.413455 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.413479 kubelet[2663]: W1213 05:55:43.413475 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.414167 kubelet[2663]: E1213 05:55:43.413492 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.414296 kubelet[2663]: E1213 05:55:43.414275 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.414296 kubelet[2663]: W1213 05:55:43.414294 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.414456 kubelet[2663]: E1213 05:55:43.414313 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.415648 kubelet[2663]: E1213 05:55:43.415213 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.415648 kubelet[2663]: W1213 05:55:43.415239 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.415648 kubelet[2663]: E1213 05:55:43.415258 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.417391 kubelet[2663]: E1213 05:55:43.416215 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.417391 kubelet[2663]: W1213 05:55:43.416235 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.417391 kubelet[2663]: E1213 05:55:43.416251 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.418238 kubelet[2663]: E1213 05:55:43.417859 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.418238 kubelet[2663]: W1213 05:55:43.417880 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.418238 kubelet[2663]: E1213 05:55:43.417899 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.420337 kubelet[2663]: E1213 05:55:43.420195 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.420337 kubelet[2663]: W1213 05:55:43.420215 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.420337 kubelet[2663]: E1213 05:55:43.420232 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.421242 kubelet[2663]: E1213 05:55:43.421221 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.422000 kubelet[2663]: W1213 05:55:43.421571 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.422000 kubelet[2663]: E1213 05:55:43.421610 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.423169 kubelet[2663]: E1213 05:55:43.423028 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.423169 kubelet[2663]: W1213 05:55:43.423070 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.423169 kubelet[2663]: E1213 05:55:43.423101 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.427275 kubelet[2663]: E1213 05:55:43.427247 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.427275 kubelet[2663]: W1213 05:55:43.427271 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.427440 kubelet[2663]: E1213 05:55:43.427289 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.428887 kubelet[2663]: E1213 05:55:43.428459 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.428887 kubelet[2663]: W1213 05:55:43.428481 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.428887 kubelet[2663]: E1213 05:55:43.428513 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.428887 kubelet[2663]: I1213 05:55:43.428544 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c45b9e9-6ab2-4973-81f3-6dae0df5a18c-kubelet-dir\") pod \"csi-node-driver-k4j64\" (UID: \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\") " pod="calico-system/csi-node-driver-k4j64"
Dec 13 05:55:43.430067 kubelet[2663]: E1213 05:55:43.429974 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.430067 kubelet[2663]: W1213 05:55:43.429999 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.430067 kubelet[2663]: E1213 05:55:43.430037 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.430284 kubelet[2663]: I1213 05:55:43.430090 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c45b9e9-6ab2-4973-81f3-6dae0df5a18c-varrun\") pod \"csi-node-driver-k4j64\" (UID: \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\") " pod="calico-system/csi-node-driver-k4j64"
Dec 13 05:55:43.431150 kubelet[2663]: E1213 05:55:43.430428 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.431150 kubelet[2663]: W1213 05:55:43.430451 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.431150 kubelet[2663]: E1213 05:55:43.430552 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.431150 kubelet[2663]: I1213 05:55:43.430585 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c45b9e9-6ab2-4973-81f3-6dae0df5a18c-socket-dir\") pod \"csi-node-driver-k4j64\" (UID: \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\") " pod="calico-system/csi-node-driver-k4j64"
Dec 13 05:55:43.432804 kubelet[2663]: E1213 05:55:43.432616 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.432804 kubelet[2663]: W1213 05:55:43.432640 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.434173 kubelet[2663]: E1213 05:55:43.433934 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.434173 kubelet[2663]: W1213 05:55:43.433957 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.434296 kubelet[2663]: E1213 05:55:43.434240 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.434296 kubelet[2663]: E1213 05:55:43.434266 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.435028 kubelet[2663]: E1213 05:55:43.434622 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.435028 kubelet[2663]: W1213 05:55:43.434653 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.435028 kubelet[2663]: E1213 05:55:43.434920 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.435947 kubelet[2663]: E1213 05:55:43.435706 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.435947 kubelet[2663]: W1213 05:55:43.435728 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.436304 kubelet[2663]: E1213 05:55:43.436071 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.436304 kubelet[2663]: I1213 05:55:43.436104 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c45b9e9-6ab2-4973-81f3-6dae0df5a18c-registration-dir\") pod \"csi-node-driver-k4j64\" (UID: \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\") " pod="calico-system/csi-node-driver-k4j64"
Dec 13 05:55:43.437587 kubelet[2663]: E1213 05:55:43.436615 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.437587 kubelet[2663]: W1213 05:55:43.436630 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.437587 kubelet[2663]: E1213 05:55:43.437186 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.437779 kubelet[2663]: E1213 05:55:43.437625 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.437779 kubelet[2663]: W1213 05:55:43.437639 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.437934 kubelet[2663]: E1213 05:55:43.437906 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.438593 kubelet[2663]: E1213 05:55:43.438559 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.438593 kubelet[2663]: W1213 05:55:43.438583 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.438714 kubelet[2663]: E1213 05:55:43.438639 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.439857 kubelet[2663]: E1213 05:55:43.439784 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.439857 kubelet[2663]: W1213 05:55:43.439807 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.439857 kubelet[2663]: E1213 05:55:43.439835 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.440875 kubelet[2663]: E1213 05:55:43.440782 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.440875 kubelet[2663]: W1213 05:55:43.440804 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.441372 kubelet[2663]: E1213 05:55:43.441244 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.441821 kubelet[2663]: E1213 05:55:43.441795 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.441821 kubelet[2663]: W1213 05:55:43.441817 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.441960 kubelet[2663]: E1213 05:55:43.441834 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.443120 kubelet[2663]: E1213 05:55:43.442659 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.443120 kubelet[2663]: W1213 05:55:43.442682 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.443120 kubelet[2663]: E1213 05:55:43.442720 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.444219 kubelet[2663]: E1213 05:55:43.443596 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.444219 kubelet[2663]: W1213 05:55:43.443625 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.444219 kubelet[2663]: E1213 05:55:43.443647 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.445039 kubelet[2663]: E1213 05:55:43.444682 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.445039 kubelet[2663]: W1213 05:55:43.444701 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.445039 kubelet[2663]: E1213 05:55:43.444730 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.448143 containerd[1496]: time="2024-12-13T05:55:43.446327240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:55:43.448143 containerd[1496]: time="2024-12-13T05:55:43.446450241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:55:43.448143 containerd[1496]: time="2024-12-13T05:55:43.446485940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:43.448143 containerd[1496]: time="2024-12-13T05:55:43.446630683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:55:43.494322 systemd[1]: Started cri-containerd-0f63c557a348ab8d96254026932e9af6dd4c1b35400bf7a5c54a4224de513f5b.scope - libcontainer container 0f63c557a348ab8d96254026932e9af6dd4c1b35400bf7a5c54a4224de513f5b.
Dec 13 05:55:43.501735 containerd[1496]: time="2024-12-13T05:55:43.501680867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qt95j,Uid:e8930a23-4cfe-4264-80f5-2611a2deca14,Namespace:calico-system,Attempt:0,}"
Dec 13 05:55:43.543389 kubelet[2663]: E1213 05:55:43.543033 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.543389 kubelet[2663]: W1213 05:55:43.543088 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.543389 kubelet[2663]: E1213 05:55:43.543134 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 05:55:43.543944 kubelet[2663]: E1213 05:55:43.543799 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.543944 kubelet[2663]: W1213 05:55:43.543812 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.543944 kubelet[2663]: E1213 05:55:43.543844 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 05:55:43.545001 kubelet[2663]: E1213 05:55:43.544972 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.545001 kubelet[2663]: W1213 05:55:43.544994 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.545148 kubelet[2663]: E1213 05:55:43.545057 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.545676 kubelet[2663]: E1213 05:55:43.545389 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.545676 kubelet[2663]: W1213 05:55:43.545411 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.545676 kubelet[2663]: E1213 05:55:43.545429 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.546241 kubelet[2663]: E1213 05:55:43.546196 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.546241 kubelet[2663]: W1213 05:55:43.546218 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.546936 kubelet[2663]: E1213 05:55:43.546854 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.547350 kubelet[2663]: E1213 05:55:43.547318 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.547350 kubelet[2663]: W1213 05:55:43.547339 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.547812 kubelet[2663]: E1213 05:55:43.547544 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.548181 kubelet[2663]: E1213 05:55:43.548156 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.548181 kubelet[2663]: W1213 05:55:43.548178 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.548490 kubelet[2663]: E1213 05:55:43.548275 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:43.548490 kubelet[2663]: I1213 05:55:43.548307 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6c6s\" (UniqueName: \"kubernetes.io/projected/3c45b9e9-6ab2-4973-81f3-6dae0df5a18c-kube-api-access-k6c6s\") pod \"csi-node-driver-k4j64\" (UID: \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\") " pod="calico-system/csi-node-driver-k4j64" Dec 13 05:55:43.548938 kubelet[2663]: E1213 05:55:43.548864 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.548938 kubelet[2663]: W1213 05:55:43.548885 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.549398 kubelet[2663]: E1213 05:55:43.549353 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.549590 kubelet[2663]: E1213 05:55:43.549567 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.549590 kubelet[2663]: W1213 05:55:43.549589 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.550091 kubelet[2663]: E1213 05:55:43.550059 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.550492 kubelet[2663]: E1213 05:55:43.550465 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.550492 kubelet[2663]: W1213 05:55:43.550487 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.550789 kubelet[2663]: E1213 05:55:43.550747 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:43.552592 kubelet[2663]: E1213 05:55:43.552183 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.552592 kubelet[2663]: W1213 05:55:43.552206 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.552592 kubelet[2663]: E1213 05:55:43.552585 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.554283 kubelet[2663]: W1213 05:55:43.552599 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.554283 kubelet[2663]: E1213 05:55:43.552839 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.554283 kubelet[2663]: E1213 05:55:43.552903 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.554838 kubelet[2663]: E1213 05:55:43.554302 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.554838 kubelet[2663]: W1213 05:55:43.554316 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.554838 kubelet[2663]: E1213 05:55:43.554411 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.554838 kubelet[2663]: E1213 05:55:43.554641 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.554838 kubelet[2663]: W1213 05:55:43.554659 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.554838 kubelet[2663]: E1213 05:55:43.554750 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.555718 kubelet[2663]: E1213 05:55:43.554983 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.555718 kubelet[2663]: W1213 05:55:43.554997 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.555718 kubelet[2663]: E1213 05:55:43.555088 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:43.555718 kubelet[2663]: E1213 05:55:43.555342 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.555718 kubelet[2663]: W1213 05:55:43.555355 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.555718 kubelet[2663]: E1213 05:55:43.555444 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.555718 kubelet[2663]: E1213 05:55:43.555676 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.555718 kubelet[2663]: W1213 05:55:43.555689 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.555734 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557172 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.558746 kubelet[2663]: W1213 05:55:43.557187 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557284 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557560 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.558746 kubelet[2663]: W1213 05:55:43.557574 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557681 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557871 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.558746 kubelet[2663]: W1213 05:55:43.557906 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.558746 kubelet[2663]: E1213 05:55:43.557956 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:43.560380 kubelet[2663]: E1213 05:55:43.558281 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.560380 kubelet[2663]: W1213 05:55:43.558294 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.560380 kubelet[2663]: E1213 05:55:43.558341 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.560380 kubelet[2663]: E1213 05:55:43.558777 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.560380 kubelet[2663]: W1213 05:55:43.558801 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.560380 kubelet[2663]: E1213 05:55:43.560215 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.562232 kubelet[2663]: E1213 05:55:43.560795 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:43.562232 kubelet[2663]: W1213 05:55:43.560816 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:43.562232 kubelet[2663]: E1213 05:55:43.560833 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:43.573608 containerd[1496]: time="2024-12-13T05:55:43.572728025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:55:43.573608 containerd[1496]: time="2024-12-13T05:55:43.572861937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:55:43.573608 containerd[1496]: time="2024-12-13T05:55:43.572887870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:43.573608 containerd[1496]: time="2024-12-13T05:55:43.573027586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:55:43.620466 systemd[1]: Started cri-containerd-2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12.scope - libcontainer container 2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12. 
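The recurring triplet above is kubelet's FlexVolume prober: on each scan of the plugin directory it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and JSON-decodes stdout. The binary is not installed yet, so the exec fails, stdout stays empty, and decoding an empty string is exactly Go's "unexpected end of JSON input" (the flexvol-driver container built from pod2daemon-flexvol, pulled later in this log, appears to be what populates that directory). As a minimal sketch of the documented FlexVolume driver contract only, not the real uds binary, an executable that would satisfy the init probe looks like this:

// Hypothetical minimal FlexVolume driver satisfying the "init" probe that is
// failing above; a sketch of the generic contract, not the nodeagent~uds/uds
// implementation.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus is the JSON shape kubelet's driver-call.go expects on stdout.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s DriverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out)) // an empty stdout is what yields "unexpected end of JSON input"
}

func main() {
	if len(os.Args) < 2 {
		reply(DriverStatus{Status: "Failure", Message: "no command"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Advertise no attach/detach support so kubelet only issues mount/unmount calls.
		reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Calls the driver does not implement must still answer in valid JSON.
		reply(DriverStatus{Status: "Not supported"})
	}
}

Dropping a conforming executable into the nodeagent~uds directory is enough to quiet the probe; the messages below simply recur until that happens.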
Dec 13 05:55:43.654014 kubelet[2663]: E1213 05:55:43.653958 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:43.654014 kubelet[2663]: W1213 05:55:43.653992 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:43.654244 kubelet[2663]: E1213 05:55:43.654031 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... identical FlexVolume init probe failures repeat from 05:55:43.655 through 05:55:43.681; duplicates elided ...]
Dec 13 05:55:43.689085 containerd[1496]: time="2024-12-13T05:55:43.688940153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-765c758d75-fx5x2,Uid:5b9c8704-c260-47f8-a76e-c75efc5eb2ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f63c557a348ab8d96254026932e9af6dd4c1b35400bf7a5c54a4224de513f5b\""
Dec 13 05:55:43.694636 containerd[1496]: time="2024-12-13T05:55:43.694136897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 05:55:43.723200 containerd[1496]: time="2024-12-13T05:55:43.723151877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qt95j,Uid:e8930a23-4cfe-4264-80f5-2611a2deca14,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\""
Dec 13 05:55:45.143609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231404649.mount: Deactivated successfully.
Dec 13 05:55:45.524921 kubelet[2663]: E1213 05:55:45.523139 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c"
Dec 13 05:55:46.484809 containerd[1496]: time="2024-12-13T05:55:46.484746490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:46.486855 containerd[1496]: time="2024-12-13T05:55:46.486651031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 05:55:46.488197 containerd[1496]: time="2024-12-13T05:55:46.487664149Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:46.490597 containerd[1496]: time="2024-12-13T05:55:46.490559784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:46.492186 containerd[1496]: time="2024-12-13T05:55:46.492145057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.797945157s"
Dec 13 05:55:46.492369 containerd[1496]: time="2024-12-13T05:55:46.492327077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 05:55:46.502326 containerd[1496]: time="2024-12-13T05:55:46.502285175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 05:55:46.543097 containerd[1496]: time="2024-12-13T05:55:46.543046359Z" level=info msg="CreateContainer within sandbox \"0f63c557a348ab8d96254026932e9af6dd4c1b35400bf7a5c54a4224de513f5b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 05:55:46.563403 containerd[1496]: time="2024-12-13T05:55:46.563198625Z" level=info msg="CreateContainer within sandbox \"0f63c557a348ab8d96254026932e9af6dd4c1b35400bf7a5c54a4224de513f5b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c78527abe805feb1d07117eff2afc6ee873c93e8e4e940bcd22156ac72270906\""
Dec 13 05:55:46.568155 containerd[1496]: time="2024-12-13T05:55:46.568075783Z" level=info msg="StartContainer for \"c78527abe805feb1d07117eff2afc6ee873c93e8e4e940bcd22156ac72270906\""
Dec 13 05:55:46.620796 systemd[1]: Started cri-containerd-c78527abe805feb1d07117eff2afc6ee873c93e8e4e940bcd22156ac72270906.scope - libcontainer container c78527abe805feb1d07117eff2afc6ee873c93e8e4e940bcd22156ac72270906.
Dec 13 05:55:46.687413 containerd[1496]: time="2024-12-13T05:55:46.687205897Z" level=info msg="StartContainer for \"c78527abe805feb1d07117eff2afc6ee873c93e8e4e940bcd22156ac72270906\" returns successfully"
Dec 13 05:55:47.532967 kubelet[2663]: E1213 05:55:47.532865 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c"
Dec 13 05:55:47.661335 kubelet[2663]: I1213 05:55:47.661232 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-765c758d75-fx5x2" podStartSLOduration=1.851812905 podStartE2EDuration="4.661212487s" podCreationTimestamp="2024-12-13 05:55:43 +0000 UTC" firstStartedPulling="2024-12-13 05:55:43.692547202 +0000 UTC m=+15.319831714" lastFinishedPulling="2024-12-13 05:55:46.501946789 +0000 UTC m=+18.129231296" observedRunningTime="2024-12-13 05:55:47.66005601 +0000 UTC m=+19.287340546" watchObservedRunningTime="2024-12-13 05:55:47.661212487 +0000 UTC m=+19.288497005"
Dec 13 05:55:47.679372 kubelet[2663]: E1213 05:55:47.679337 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 05:55:47.679372 kubelet[2663]: W1213 05:55:47.679370 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 05:55:47.679595 kubelet[2663]: E1213 05:55:47.679396 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the FlexVolume init probe failure repeats unchanged from 05:55:47.680 through 05:55:47.697; duplicates elided ...]
Dec 13 05:55:48.487339 containerd[1496]: time="2024-12-13T05:55:48.487240693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:48.488888 containerd[1496]: time="2024-12-13T05:55:48.488665781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 05:55:48.489955 containerd[1496]: time="2024-12-13T05:55:48.489891359Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:48.493748 containerd[1496]: time="2024-12-13T05:55:48.493668326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:55:48.495164 containerd[1496]: time="2024-12-13T05:55:48.494835412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.991339944s"
Dec 13 05:55:48.495164 containerd[1496]: time="2024-12-13T05:55:48.494885605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 05:55:48.501944 containerd[1496]: time="2024-12-13T05:55:48.501729610Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 05:55:48.543930 containerd[1496]: time="2024-12-13T05:55:48.543719846Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb\""
\"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb\"" Dec 13 05:55:48.545493 containerd[1496]: time="2024-12-13T05:55:48.545382615Z" level=info msg="StartContainer for \"ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb\"" Dec 13 05:55:48.632368 systemd[1]: Started cri-containerd-ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb.scope - libcontainer container ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb. Dec 13 05:55:48.648404 kubelet[2663]: I1213 05:55:48.648344 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:55:48.690414 containerd[1496]: time="2024-12-13T05:55:48.690340254Z" level=info msg="StartContainer for \"ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb\" returns successfully" Dec 13 05:55:48.693540 kubelet[2663]: E1213 05:55:48.693309 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.693648 kubelet[2663]: W1213 05:55:48.693604 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.693713 kubelet[2663]: E1213 05:55:48.693636 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.694394 kubelet[2663]: E1213 05:55:48.694174 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.694394 kubelet[2663]: W1213 05:55:48.694188 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.694394 kubelet[2663]: E1213 05:55:48.694204 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.694827 kubelet[2663]: E1213 05:55:48.694607 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.694898 kubelet[2663]: W1213 05:55:48.694859 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.694898 kubelet[2663]: E1213 05:55:48.694884 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:48.695609 kubelet[2663]: E1213 05:55:48.695445 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.695609 kubelet[2663]: W1213 05:55:48.695465 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.695609 kubelet[2663]: E1213 05:55:48.695480 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.696061 kubelet[2663]: E1213 05:55:48.696035 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.696061 kubelet[2663]: W1213 05:55:48.696055 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.696195 kubelet[2663]: E1213 05:55:48.696075 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.696701 kubelet[2663]: E1213 05:55:48.696552 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.696701 kubelet[2663]: W1213 05:55:48.696572 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.696701 kubelet[2663]: E1213 05:55:48.696596 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.697302 kubelet[2663]: E1213 05:55:48.697278 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.697302 kubelet[2663]: W1213 05:55:48.697300 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.698344 kubelet[2663]: E1213 05:55:48.697317 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.698344 kubelet[2663]: E1213 05:55:48.698169 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.698344 kubelet[2663]: W1213 05:55:48.698183 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.698344 kubelet[2663]: E1213 05:55:48.698224 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:48.698965 kubelet[2663]: E1213 05:55:48.698906 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.698965 kubelet[2663]: W1213 05:55:48.698932 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.698965 kubelet[2663]: E1213 05:55:48.698948 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.699559 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.700804 kubelet[2663]: W1213 05:55:48.699579 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.699596 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.700075 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.700804 kubelet[2663]: W1213 05:55:48.700089 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.700105 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.700738 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.700804 kubelet[2663]: W1213 05:55:48.700753 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.700804 kubelet[2663]: E1213 05:55:48.700767 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.702094 kubelet[2663]: E1213 05:55:48.702072 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.702094 kubelet[2663]: W1213 05:55:48.702093 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.702284 kubelet[2663]: E1213 05:55:48.702110 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:55:48.702820 kubelet[2663]: E1213 05:55:48.702799 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.702820 kubelet[2663]: W1213 05:55:48.702819 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.702972 kubelet[2663]: E1213 05:55:48.702835 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.703389 kubelet[2663]: E1213 05:55:48.703314 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:55:48.703389 kubelet[2663]: W1213 05:55:48.703333 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:55:48.703389 kubelet[2663]: E1213 05:55:48.703361 2663 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:55:48.722179 systemd[1]: cri-containerd-ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb.scope: Deactivated successfully. Dec 13 05:55:48.756068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb-rootfs.mount: Deactivated successfully. Dec 13 05:55:48.972688 containerd[1496]: time="2024-12-13T05:55:48.949649675Z" level=info msg="shim disconnected" id=ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb namespace=k8s.io Dec 13 05:55:48.972688 containerd[1496]: time="2024-12-13T05:55:48.972657763Z" level=warning msg="cleaning up after shim disconnected" id=ad2a66f0edfd63b755b4d9df56462b3b62876d4f0086ae0061e554d5e459cdfb namespace=k8s.io Dec 13 05:55:48.972688 containerd[1496]: time="2024-12-13T05:55:48.972695471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:55:49.523407 kubelet[2663]: E1213 05:55:49.523279 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:49.660085 containerd[1496]: time="2024-12-13T05:55:49.659944544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 05:55:50.565643 kubelet[2663]: I1213 05:55:50.565598 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:55:51.523029 kubelet[2663]: E1213 05:55:51.522918 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:53.524378 kubelet[2663]: E1213 05:55:53.523773 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:55.523723 kubelet[2663]: E1213 05:55:55.523643 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:55.785908 containerd[1496]: time="2024-12-13T05:55:55.785781997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:55.787287 containerd[1496]: time="2024-12-13T05:55:55.786930414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 05:55:55.787991 containerd[1496]: time="2024-12-13T05:55:55.787949450Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:55.792269 containerd[1496]: time="2024-12-13T05:55:55.792226857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:55:55.793225 containerd[1496]: time="2024-12-13T05:55:55.793183259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.133120489s" Dec 13 05:55:55.793325 containerd[1496]: time="2024-12-13T05:55:55.793231570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 05:55:55.798712 containerd[1496]: time="2024-12-13T05:55:55.798588150Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 05:55:55.820111 containerd[1496]: time="2024-12-13T05:55:55.819969115Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578\"" Dec 13 05:55:55.821984 containerd[1496]: time="2024-12-13T05:55:55.821022742Z" level=info msg="StartContainer for \"b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578\"" Dec 13 05:55:55.902344 systemd[1]: Started cri-containerd-b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578.scope - libcontainer container b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578. Dec 13 05:55:55.952641 containerd[1496]: time="2024-12-13T05:55:55.952587598Z" level=info msg="StartContainer for \"b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578\" returns successfully" Dec 13 05:55:56.784092 systemd[1]: cri-containerd-b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578.scope: Deactivated successfully. 
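[The repeated driver-call.go/plugins.go failures above come from kubelet's FlexVolume prober: on each probe cycle it execs every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and parses the driver's stdout as JSON. Here the nodeagent~uds directory exists but its uds executable does not, so exec fails, stdout is empty, and unmarshaling the empty string yields "unexpected end of JSON input" — hence the same three-line triplet repeating with advancing timestamps. A minimal Python sketch of the contract kubelet expects from init — a hypothetical stand-in for illustration only, not the real uds driver — looks like:

    import json
    import sys

    def main():
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # kubelet parses stdout as JSON; an empty stdout is exactly what
            # produces "unexpected end of JSON input" in driver-call.go above
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            # a real driver would also implement mount/unmount operations here
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()

Any executable placed at that path that prints such a status object would stop the probe errors; until then kubelet simply retries and skips the plugin.]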
Dec 13 05:55:56.830917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578-rootfs.mount: Deactivated successfully. Dec 13 05:55:57.012564 containerd[1496]: time="2024-12-13T05:55:57.012441310Z" level=info msg="shim disconnected" id=b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578 namespace=k8s.io Dec 13 05:55:57.012564 containerd[1496]: time="2024-12-13T05:55:57.012546765Z" level=warning msg="cleaning up after shim disconnected" id=b8456f821a757dc7f6dab670dd2204d1fcea5e94ced753e750ba68c7d33c3578 namespace=k8s.io Dec 13 05:55:57.012564 containerd[1496]: time="2024-12-13T05:55:57.012572662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:55:57.033593 containerd[1496]: time="2024-12-13T05:55:57.033103439Z" level=warning msg="cleanup warnings time=\"2024-12-13T05:55:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 05:55:57.037405 kubelet[2663]: I1213 05:55:57.037316 2663 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 05:55:57.092690 kubelet[2663]: W1213 05:55:57.092643 2663 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-kh3sk.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object Dec 13 05:55:57.092850 kubelet[2663]: E1213 05:55:57.092701 2663 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-kh3sk.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-kh3sk.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 13 05:55:57.098810 systemd[1]: Created slice kubepods-burstable-pod1222d645_a9c3_4118_adcb_40dbd48ca62c.slice - libcontainer container kubepods-burstable-pod1222d645_a9c3_4118_adcb_40dbd48ca62c.slice. Dec 13 05:55:57.110405 systemd[1]: Created slice kubepods-burstable-podbce5c938_2412_4bd8_bdc5_eb3ae91af6a3.slice - libcontainer container kubepods-burstable-podbce5c938_2412_4bd8_bdc5_eb3ae91af6a3.slice. Dec 13 05:55:57.121859 systemd[1]: Created slice kubepods-besteffort-pod7b95cf7e_357f_49c3_a82d_c2fe116243f9.slice - libcontainer container kubepods-besteffort-pod7b95cf7e_357f_49c3_a82d_c2fe116243f9.slice. Dec 13 05:55:57.130424 systemd[1]: Created slice kubepods-besteffort-pod2ea0d766_8ed2_408a_bdc9_39aa78d68aef.slice - libcontainer container kubepods-besteffort-pod2ea0d766_8ed2_408a_bdc9_39aa78d68aef.slice. Dec 13 05:55:57.140755 systemd[1]: Created slice kubepods-besteffort-podc4ca8aab_4bca_427a_b3f8_a2ed09442b71.slice - libcontainer container kubepods-besteffort-podc4ca8aab_4bca_427a_b3f8_a2ed09442b71.slice. 
Dec 13 05:55:57.168918 kubelet[2663]: I1213 05:55:57.168862 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ngt\" (UniqueName: \"kubernetes.io/projected/7b95cf7e-357f-49c3-a82d-c2fe116243f9-kube-api-access-d8ngt\") pod \"calico-kube-controllers-64d98ccd4f-zshqn\" (UID: \"7b95cf7e-357f-49c3-a82d-c2fe116243f9\") " pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" Dec 13 05:55:57.169577 kubelet[2663]: I1213 05:55:57.168966 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1222d645-a9c3-4118-adcb-40dbd48ca62c-config-volume\") pod \"coredns-6f6b679f8f-lqlzm\" (UID: \"1222d645-a9c3-4118-adcb-40dbd48ca62c\") " pod="kube-system/coredns-6f6b679f8f-lqlzm" Dec 13 05:55:57.169577 kubelet[2663]: I1213 05:55:57.169060 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl8q\" (UniqueName: \"kubernetes.io/projected/c4ca8aab-4bca-427a-b3f8-a2ed09442b71-kube-api-access-kkl8q\") pod \"calico-apiserver-78d8447597-nztwj\" (UID: \"c4ca8aab-4bca-427a-b3f8-a2ed09442b71\") " pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" Dec 13 05:55:57.169577 kubelet[2663]: I1213 05:55:57.169146 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce5c938-2412-4bd8-bdc5-eb3ae91af6a3-config-volume\") pod \"coredns-6f6b679f8f-5dh4s\" (UID: \"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3\") " pod="kube-system/coredns-6f6b679f8f-5dh4s" Dec 13 05:55:57.169577 kubelet[2663]: I1213 05:55:57.169219 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4864\" (UniqueName: \"kubernetes.io/projected/1222d645-a9c3-4118-adcb-40dbd48ca62c-kube-api-access-x4864\") pod \"coredns-6f6b679f8f-lqlzm\" (UID: \"1222d645-a9c3-4118-adcb-40dbd48ca62c\") " pod="kube-system/coredns-6f6b679f8f-lqlzm" Dec 13 05:55:57.169577 kubelet[2663]: I1213 05:55:57.169295 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttc9n\" (UniqueName: \"kubernetes.io/projected/2ea0d766-8ed2-408a-bdc9-39aa78d68aef-kube-api-access-ttc9n\") pod \"calico-apiserver-78d8447597-jg7hn\" (UID: \"2ea0d766-8ed2-408a-bdc9-39aa78d68aef\") " pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" Dec 13 05:55:57.169881 kubelet[2663]: I1213 05:55:57.169333 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c4ca8aab-4bca-427a-b3f8-a2ed09442b71-calico-apiserver-certs\") pod \"calico-apiserver-78d8447597-nztwj\" (UID: \"c4ca8aab-4bca-427a-b3f8-a2ed09442b71\") " pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" Dec 13 05:55:57.169881 kubelet[2663]: I1213 05:55:57.169397 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95cf7e-357f-49c3-a82d-c2fe116243f9-tigera-ca-bundle\") pod \"calico-kube-controllers-64d98ccd4f-zshqn\" (UID: \"7b95cf7e-357f-49c3-a82d-c2fe116243f9\") " pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" Dec 13 05:55:57.169881 kubelet[2663]: I1213 05:55:57.169430 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9scl9\" (UniqueName: \"kubernetes.io/projected/bce5c938-2412-4bd8-bdc5-eb3ae91af6a3-kube-api-access-9scl9\") pod \"coredns-6f6b679f8f-5dh4s\" (UID: \"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3\") " pod="kube-system/coredns-6f6b679f8f-5dh4s" Dec 13 05:55:57.169881 kubelet[2663]: I1213 05:55:57.169499 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ea0d766-8ed2-408a-bdc9-39aa78d68aef-calico-apiserver-certs\") pod \"calico-apiserver-78d8447597-jg7hn\" (UID: \"2ea0d766-8ed2-408a-bdc9-39aa78d68aef\") " pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" Dec 13 05:55:57.405938 containerd[1496]: time="2024-12-13T05:55:57.405783401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqlzm,Uid:1222d645-a9c3-4118-adcb-40dbd48ca62c,Namespace:kube-system,Attempt:0,}" Dec 13 05:55:57.415880 containerd[1496]: time="2024-12-13T05:55:57.415822963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dh4s,Uid:bce5c938-2412-4bd8-bdc5-eb3ae91af6a3,Namespace:kube-system,Attempt:0,}" Dec 13 05:55:57.427203 containerd[1496]: time="2024-12-13T05:55:57.427170572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d98ccd4f-zshqn,Uid:7b95cf7e-357f-49c3-a82d-c2fe116243f9,Namespace:calico-system,Attempt:0,}" Dec 13 05:55:57.541606 systemd[1]: Created slice kubepods-besteffort-pod3c45b9e9_6ab2_4973_81f3_6dae0df5a18c.slice - libcontainer container kubepods-besteffort-pod3c45b9e9_6ab2_4973_81f3_6dae0df5a18c.slice. Dec 13 05:55:57.571310 containerd[1496]: time="2024-12-13T05:55:57.571218820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4j64,Uid:3c45b9e9-6ab2-4973-81f3-6dae0df5a18c,Namespace:calico-system,Attempt:0,}" Dec 13 05:55:57.709371 containerd[1496]: time="2024-12-13T05:55:57.708321346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 05:55:57.799009 containerd[1496]: time="2024-12-13T05:55:57.798946485Z" level=error msg="Failed to destroy network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.808980 containerd[1496]: time="2024-12-13T05:55:57.808926234Z" level=error msg="encountered an error cleaning up failed sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.809623 containerd[1496]: time="2024-12-13T05:55:57.809571968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d98ccd4f-zshqn,Uid:7b95cf7e-357f-49c3-a82d-c2fe116243f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.810533 kubelet[2663]: E1213 05:55:57.810218 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.810533 kubelet[2663]: E1213 05:55:57.810339 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" Dec 13 05:55:57.810533 kubelet[2663]: E1213 05:55:57.810380 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" Dec 13 05:55:57.812268 kubelet[2663]: E1213 05:55:57.810474 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64d98ccd4f-zshqn_calico-system(7b95cf7e-357f-49c3-a82d-c2fe116243f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64d98ccd4f-zshqn_calico-system(7b95cf7e-357f-49c3-a82d-c2fe116243f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" podUID="7b95cf7e-357f-49c3-a82d-c2fe116243f9" Dec 13 05:55:57.823753 containerd[1496]: time="2024-12-13T05:55:57.823712280Z" level=error msg="Failed to destroy network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.824423 containerd[1496]: time="2024-12-13T05:55:57.824352719Z" level=error msg="encountered an error cleaning up failed sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.824653 containerd[1496]: time="2024-12-13T05:55:57.824570878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqlzm,Uid:1222d645-a9c3-4118-adcb-40dbd48ca62c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 13 05:55:57.825488 kubelet[2663]: E1213 05:55:57.825010 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.825488 kubelet[2663]: E1213 05:55:57.825064 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lqlzm" Dec 13 05:55:57.825488 kubelet[2663]: E1213 05:55:57.825091 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lqlzm" Dec 13 05:55:57.826699 kubelet[2663]: E1213 05:55:57.825181 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lqlzm_kube-system(1222d645-a9c3-4118-adcb-40dbd48ca62c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lqlzm_kube-system(1222d645-a9c3-4118-adcb-40dbd48ca62c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lqlzm" podUID="1222d645-a9c3-4118-adcb-40dbd48ca62c" Dec 13 05:55:57.843339 containerd[1496]: time="2024-12-13T05:55:57.842888454Z" level=error msg="Failed to destroy network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.843003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3-shm.mount: Deactivated successfully. 
Dec 13 05:55:57.849840 containerd[1496]: time="2024-12-13T05:55:57.849800663Z" level=error msg="encountered an error cleaning up failed sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.852280 containerd[1496]: time="2024-12-13T05:55:57.850180557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dh4s,Uid:bce5c938-2412-4bd8-bdc5-eb3ae91af6a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.851198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046-shm.mount: Deactivated successfully. Dec 13 05:55:57.852533 kubelet[2663]: E1213 05:55:57.852465 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.852607 kubelet[2663]: E1213 05:55:57.852545 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5dh4s" Dec 13 05:55:57.852607 kubelet[2663]: E1213 05:55:57.852572 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5dh4s" Dec 13 05:55:57.852745 kubelet[2663]: E1213 05:55:57.852631 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5dh4s_kube-system(bce5c938-2412-4bd8-bdc5-eb3ae91af6a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5dh4s_kube-system(bce5c938-2412-4bd8-bdc5-eb3ae91af6a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5dh4s" podUID="bce5c938-2412-4bd8-bdc5-eb3ae91af6a3" Dec 13 05:55:57.856850 kubelet[2663]: E1213 05:55:57.855923 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.856850 kubelet[2663]: E1213 05:55:57.855975 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4j64" Dec 13 05:55:57.856850 kubelet[2663]: E1213 05:55:57.856000 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4j64" Dec 13 05:55:57.856439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc-shm.mount: Deactivated successfully. Dec 13 05:55:57.858896 containerd[1496]: time="2024-12-13T05:55:57.853303461Z" level=error msg="Failed to destroy network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.858896 containerd[1496]: time="2024-12-13T05:55:57.853726882Z" level=error msg="encountered an error cleaning up failed sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.858896 containerd[1496]: time="2024-12-13T05:55:57.853774259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4j64,Uid:3c45b9e9-6ab2-4973-81f3-6dae0df5a18c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:57.859068 kubelet[2663]: E1213 05:55:57.856092 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k4j64_calico-system(3c45b9e9-6ab2-4973-81f3-6dae0df5a18c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k4j64_calico-system(3c45b9e9-6ab2-4973-81f3-6dae0df5a18c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-k4j64" podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:58.338936 containerd[1496]: time="2024-12-13T05:55:58.338867233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-jg7hn,Uid:2ea0d766-8ed2-408a-bdc9-39aa78d68aef,Namespace:calico-apiserver,Attempt:0,}" Dec 13 05:55:58.346736 containerd[1496]: time="2024-12-13T05:55:58.346483006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-nztwj,Uid:c4ca8aab-4bca-427a-b3f8-a2ed09442b71,Namespace:calico-apiserver,Attempt:0,}" Dec 13 05:55:58.428331 containerd[1496]: time="2024-12-13T05:55:58.428185097Z" level=error msg="Failed to destroy network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.428868 containerd[1496]: time="2024-12-13T05:55:58.428827596Z" level=error msg="encountered an error cleaning up failed sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.429167 containerd[1496]: time="2024-12-13T05:55:58.429051939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-jg7hn,Uid:2ea0d766-8ed2-408a-bdc9-39aa78d68aef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.429804 kubelet[2663]: E1213 05:55:58.429495 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.429804 kubelet[2663]: E1213 05:55:58.429574 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" Dec 13 05:55:58.429804 kubelet[2663]: E1213 05:55:58.429613 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" Dec 13 05:55:58.430558 kubelet[2663]: E1213 05:55:58.429705 2663 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d8447597-jg7hn_calico-apiserver(2ea0d766-8ed2-408a-bdc9-39aa78d68aef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d8447597-jg7hn_calico-apiserver(2ea0d766-8ed2-408a-bdc9-39aa78d68aef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" podUID="2ea0d766-8ed2-408a-bdc9-39aa78d68aef" Dec 13 05:55:58.458105 containerd[1496]: time="2024-12-13T05:55:58.457881676Z" level=error msg="Failed to destroy network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.458611 containerd[1496]: time="2024-12-13T05:55:58.458326821Z" level=error msg="encountered an error cleaning up failed sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.458611 containerd[1496]: time="2024-12-13T05:55:58.458392990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-nztwj,Uid:c4ca8aab-4bca-427a-b3f8-a2ed09442b71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.458816 kubelet[2663]: E1213 05:55:58.458759 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.459096 kubelet[2663]: E1213 05:55:58.458989 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" Dec 13 05:55:58.459207 kubelet[2663]: E1213 05:55:58.459098 2663 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" Dec 13 05:55:58.459776 kubelet[2663]: E1213 05:55:58.459434 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d8447597-nztwj_calico-apiserver(c4ca8aab-4bca-427a-b3f8-a2ed09442b71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d8447597-nztwj_calico-apiserver(c4ca8aab-4bca-427a-b3f8-a2ed09442b71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" podUID="c4ca8aab-4bca-427a-b3f8-a2ed09442b71" Dec 13 05:55:58.713060 kubelet[2663]: I1213 05:55:58.712872 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:55:58.726402 kubelet[2663]: I1213 05:55:58.724730 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:55:58.726533 containerd[1496]: time="2024-12-13T05:55:58.726168155Z" level=info msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" Dec 13 05:55:58.728923 containerd[1496]: time="2024-12-13T05:55:58.728870682Z" level=info msg="Ensure that sandbox c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3 in task-service has been cleanup successfully" Dec 13 05:55:58.730023 containerd[1496]: time="2024-12-13T05:55:58.729950497Z" level=info msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" Dec 13 05:55:58.730248 containerd[1496]: time="2024-12-13T05:55:58.730216863Z" level=info msg="Ensure that sandbox 8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc in task-service has been cleanup successfully" Dec 13 05:55:58.732745 kubelet[2663]: I1213 05:55:58.732699 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:55:58.735074 containerd[1496]: time="2024-12-13T05:55:58.735023362Z" level=info msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" Dec 13 05:55:58.735515 containerd[1496]: time="2024-12-13T05:55:58.735247040Z" level=info msg="Ensure that sandbox c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3 in task-service has been cleanup successfully" Dec 13 05:55:58.740308 kubelet[2663]: I1213 05:55:58.740271 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:55:58.743487 containerd[1496]: time="2024-12-13T05:55:58.743395673Z" level=info msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" Dec 13 05:55:58.744850 containerd[1496]: time="2024-12-13T05:55:58.743639397Z" level=info msg="Ensure that sandbox 3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc in task-service has been cleanup successfully" Dec 13 05:55:58.748536 kubelet[2663]: I1213 05:55:58.747535 2663 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:55:58.748987 containerd[1496]: time="2024-12-13T05:55:58.748781888Z" level=info msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" Dec 13 05:55:58.749763 containerd[1496]: time="2024-12-13T05:55:58.749458005Z" level=info msg="Ensure that sandbox 962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046 in task-service has been cleanup successfully" Dec 13 05:55:58.756561 kubelet[2663]: I1213 05:55:58.756527 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:55:58.759476 containerd[1496]: time="2024-12-13T05:55:58.759384256Z" level=info msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" Dec 13 05:55:58.762957 containerd[1496]: time="2024-12-13T05:55:58.762924718Z" level=info msg="Ensure that sandbox b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4 in task-service has been cleanup successfully" Dec 13 05:55:58.856531 containerd[1496]: time="2024-12-13T05:55:58.856410284Z" level=error msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" failed" error="failed to destroy network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.856823 kubelet[2663]: E1213 05:55:58.856779 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:55:58.857323 kubelet[2663]: E1213 05:55:58.856859 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc"} Dec 13 05:55:58.857323 kubelet[2663]: E1213 05:55:58.856954 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.857323 kubelet[2663]: E1213 05:55:58.856988 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4j64" 
podUID="3c45b9e9-6ab2-4973-81f3-6dae0df5a18c" Dec 13 05:55:58.900503 containerd[1496]: time="2024-12-13T05:55:58.900408085Z" level=error msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" failed" error="failed to destroy network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.901628 kubelet[2663]: E1213 05:55:58.901263 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:55:58.901628 kubelet[2663]: E1213 05:55:58.901357 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc"} Dec 13 05:55:58.901628 kubelet[2663]: E1213 05:55:58.901422 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4ca8aab-4bca-427a-b3f8-a2ed09442b71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.901628 kubelet[2663]: E1213 05:55:58.901452 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4ca8aab-4bca-427a-b3f8-a2ed09442b71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" podUID="c4ca8aab-4bca-427a-b3f8-a2ed09442b71" Dec 13 05:55:58.902989 containerd[1496]: time="2024-12-13T05:55:58.902044728Z" level=error msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" failed" error="failed to destroy network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.903512 kubelet[2663]: E1213 05:55:58.903140 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:55:58.903512 
kubelet[2663]: E1213 05:55:58.903185 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4"} Dec 13 05:55:58.903512 kubelet[2663]: E1213 05:55:58.903234 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ea0d766-8ed2-408a-bdc9-39aa78d68aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.903512 kubelet[2663]: E1213 05:55:58.903284 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ea0d766-8ed2-408a-bdc9-39aa78d68aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" podUID="2ea0d766-8ed2-408a-bdc9-39aa78d68aef" Dec 13 05:55:58.905709 containerd[1496]: time="2024-12-13T05:55:58.905419192Z" level=error msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" failed" error="failed to destroy network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.905811 kubelet[2663]: E1213 05:55:58.905625 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:55:58.905811 kubelet[2663]: E1213 05:55:58.905674 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3"} Dec 13 05:55:58.905811 kubelet[2663]: E1213 05:55:58.905712 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b95cf7e-357f-49c3-a82d-c2fe116243f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.905811 kubelet[2663]: E1213 05:55:58.905739 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b95cf7e-357f-49c3-a82d-c2fe116243f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" podUID="7b95cf7e-357f-49c3-a82d-c2fe116243f9" Dec 13 05:55:58.912070 containerd[1496]: time="2024-12-13T05:55:58.912027147Z" level=error msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" failed" error="failed to destroy network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.912303 kubelet[2663]: E1213 05:55:58.912267 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:55:58.912411 kubelet[2663]: E1213 05:55:58.912311 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046"} Dec 13 05:55:58.912411 kubelet[2663]: E1213 05:55:58.912352 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.912411 kubelet[2663]: E1213 05:55:58.912384 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5dh4s" podUID="bce5c938-2412-4bd8-bdc5-eb3ae91af6a3" Dec 13 05:55:58.912709 kubelet[2663]: E1213 05:55:58.912621 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:55:58.912709 kubelet[2663]: E1213 05:55:58.912695 2663 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3"} Dec 13 05:55:58.912820 containerd[1496]: time="2024-12-13T05:55:58.912046347Z" level=error msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" failed" error="failed to destroy network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:55:58.912886 kubelet[2663]: E1213 05:55:58.912747 2663 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1222d645-a9c3-4118-adcb-40dbd48ca62c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:55:58.912886 kubelet[2663]: E1213 05:55:58.912776 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1222d645-a9c3-4118-adcb-40dbd48ca62c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lqlzm" podUID="1222d645-a9c3-4118-adcb-40dbd48ca62c" Dec 13 05:56:07.608761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497898656.mount: Deactivated successfully. 
Dec 13 05:56:07.737530 containerd[1496]: time="2024-12-13T05:56:07.699941758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 05:56:07.739938 containerd[1496]: time="2024-12-13T05:56:07.725444325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.005590623s" Dec 13 05:56:07.740170 containerd[1496]: time="2024-12-13T05:56:07.740110357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 05:56:07.748056 containerd[1496]: time="2024-12-13T05:56:07.747214067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:07.784222 containerd[1496]: time="2024-12-13T05:56:07.784157369Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:07.785199 containerd[1496]: time="2024-12-13T05:56:07.785159382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:07.806842 containerd[1496]: time="2024-12-13T05:56:07.806692252Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 05:56:07.897031 containerd[1496]: time="2024-12-13T05:56:07.896905621Z" level=info msg="CreateContainer within sandbox \"2e45d5be902bde6873aada64a5656871f7b4f91ba2f022fac938b4109664ce12\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46\"" Dec 13 05:56:07.901383 containerd[1496]: time="2024-12-13T05:56:07.901351799Z" level=info msg="StartContainer for \"bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46\"" Dec 13 05:56:08.137545 systemd[1]: Started cri-containerd-bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46.scope - libcontainer container bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46. Dec 13 05:56:08.246134 containerd[1496]: time="2024-12-13T05:56:08.245788856Z" level=info msg="StartContainer for \"bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46\" returns successfully" Dec 13 05:56:08.428611 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 05:56:08.430159 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
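The pull entries above are self-consistent and give a rough transfer rate: 142742010 bytes read over the reported 10.005590623s is about 14.3 MB/s. A quick check, with every figure copied from the log; the small gap between "bytes read=142742010" and the recorded size "142741872" suggests the two counters are sampled at slightly different layers:

    package main

    import "fmt"

    func main() {
        // "bytes read=142742010" and "in 10.005590623s" from the entries above.
        const bytesRead = 142742010
        const seconds = 10.005590623

        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n",
            bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
    }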
Dec 13 05:56:08.865863 kubelet[2663]: I1213 05:56:08.857867 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qt95j" podStartSLOduration=1.818294538 podStartE2EDuration="25.832619883s" podCreationTimestamp="2024-12-13 05:55:43 +0000 UTC" firstStartedPulling="2024-12-13 05:55:43.7272703 +0000 UTC m=+15.354554808" lastFinishedPulling="2024-12-13 05:56:07.741595643 +0000 UTC m=+39.368880153" observedRunningTime="2024-12-13 05:56:08.827814706 +0000 UTC m=+40.455099229" watchObservedRunningTime="2024-12-13 05:56:08.832619883 +0000 UTC m=+40.459904402" Dec 13 05:56:09.900683 systemd[1]: run-containerd-runc-k8s.io-bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46-runc.O49s7S.mount: Deactivated successfully. Dec 13 05:56:10.453248 kernel: bpftool[3951]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 05:56:10.526707 containerd[1496]: time="2024-12-13T05:56:10.526641444Z" level=info msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" Dec 13 05:56:10.530269 containerd[1496]: time="2024-12-13T05:56:10.530234648Z" level=info msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" Dec 13 05:56:10.882987 systemd[1]: run-containerd-runc-k8s.io-bed20b1df8ac993d354293142d178ef3a1cf8e405716150ff6003ec809d83e46-runc.ImNuAL.mount: Deactivated successfully. Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.685 [INFO][3995] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.685 [INFO][3995] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" iface="eth0" netns="/var/run/netns/cni-54b82af4-fb18-43db-524a-922729211ec3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.686 [INFO][3995] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" iface="eth0" netns="/var/run/netns/cni-54b82af4-fb18-43db-524a-922729211ec3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.687 [INFO][3995] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" iface="eth0" netns="/var/run/netns/cni-54b82af4-fb18-43db-524a-922729211ec3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.687 [INFO][3995] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.687 [INFO][3995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.919 [INFO][4007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.922 [INFO][4007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
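The startup-latency entry above can be re-derived from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (05:56:08.832619883 - 05:55:43 = 25.832619883s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling). A back-of-the-envelope check using the logged monotonic m=+ offsets, not kubelet's actual code:

    package main

    import "fmt"

    func main() {
        // m=+... offsets and the E2E duration from the kubelet entry above.
        firstStartedPulling := 15.354554808
        lastFinishedPulling := 39.368880153
        podStartE2E := 25.832619883 // seconds

        imagePull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull:   %.9fs\n", imagePull)             // 24.014325345s
        fmt.Printf("SLO duration: %.9fs\n", podStartE2E-imagePull) // 1.818294538s
    }

Both printed values match the entry: image pulling dominated the 25.8s startup, leaving an SLO-relevant duration under two seconds.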
Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.922 [INFO][4007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.943 [WARNING][4007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.943 [INFO][4007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.946 [INFO][4007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:10.955590 containerd[1496]: 2024-12-13 05:56:10.950 [INFO][3995] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:10.966327 systemd[1]: run-netns-cni\x2d54b82af4\x2dfb18\x2d43db\x2d524a\x2d922729211ec3.mount: Deactivated successfully. Dec 13 05:56:10.979452 containerd[1496]: time="2024-12-13T05:56:10.979150134Z" level=info msg="TearDown network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" successfully" Dec 13 05:56:10.979452 containerd[1496]: time="2024-12-13T05:56:10.979421572Z" level=info msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" returns successfully" Dec 13 05:56:10.992251 systemd-networkd[1415]: vxlan.calico: Link UP Dec 13 05:56:10.995399 containerd[1496]: time="2024-12-13T05:56:10.992455777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d98ccd4f-zshqn,Uid:7b95cf7e-357f-49c3-a82d-c2fe116243f9,Namespace:calico-system,Attempt:1,}" Dec 13 05:56:10.992264 systemd-networkd[1415]: vxlan.calico: Gained carrier Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.687 [INFO][3994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.688 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" iface="eth0" netns="/var/run/netns/cni-d3cae0fb-191a-b89c-81ae-1c61f9c58db2" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.688 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" iface="eth0" netns="/var/run/netns/cni-d3cae0fb-191a-b89c-81ae-1c61f9c58db2" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.689 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" iface="eth0" netns="/var/run/netns/cni-d3cae0fb-191a-b89c-81ae-1c61f9c58db2" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.689 [INFO][3994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.689 [INFO][3994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.919 [INFO][4008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.922 [INFO][4008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.946 [INFO][4008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.963 [WARNING][4008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.965 [INFO][4008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.969 [INFO][4008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:10.998408 containerd[1496]: 2024-12-13 05:56:10.986 [INFO][3994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:11.000080 containerd[1496]: time="2024-12-13T05:56:10.999789052Z" level=info msg="TearDown network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" successfully" Dec 13 05:56:11.000080 containerd[1496]: time="2024-12-13T05:56:10.999816744Z" level=info msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" returns successfully" Dec 13 05:56:11.005304 containerd[1496]: time="2024-12-13T05:56:11.005262384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-nztwj,Uid:c4ca8aab-4bca-427a-b3f8-a2ed09442b71,Namespace:calico-apiserver,Attempt:1,}" Dec 13 05:56:11.010608 systemd[1]: run-netns-cni\x2dd3cae0fb\x2d191a\x2db89c\x2d81ae\x2d1c61f9c58db2.mount: Deactivated successfully. 
Dec 13 05:56:11.414320 systemd-networkd[1415]: cali3450b8998f0: Link UP Dec 13 05:56:11.415031 systemd-networkd[1415]: cali3450b8998f0: Gained carrier Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.252 [INFO][4071] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0 calico-apiserver-78d8447597- calico-apiserver c4ca8aab-4bca-427a-b3f8-a2ed09442b71 749 0 2024-12-13 05:55:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d8447597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com calico-apiserver-78d8447597-nztwj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3450b8998f0 [] []}} ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.252 [INFO][4071] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.335 [INFO][4108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" HandleID="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.356 [INFO][4108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" HandleID="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"calico-apiserver-78d8447597-nztwj", "timestamp":"2024-12-13 05:56:11.335720904 +0000 UTC"}, Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.356 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.356 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.356 [INFO][4108] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.359 [INFO][4108] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.374 [INFO][4108] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.381 [INFO][4108] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.383 [INFO][4108] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.387 [INFO][4108] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.387 [INFO][4108] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.389 [INFO][4108] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.396 [INFO][4108] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4108] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.65/26] block=192.168.99.64/26 handle="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4108] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.65/26] handle="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
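The IPAM transaction above confirms the node's affinity for block 192.168.99.64/26 and claims the first free address, 192.168.99.65, handing the pod a /32 out of that block. A /26 spans 64 addresses (.64 through .127); a quick containment check with the standard library:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Block and claimed address from the ipam entries above.
        _, block, err := net.ParseCIDR("192.168.99.64/26")
        if err != nil {
            panic(err)
        }
        ip := net.ParseIP("192.168.99.65")

        ones, bits := block.Mask.Size()
        fmt.Println(block.Contains(ip)) // true
        fmt.Println(1 << (bits - ones)) // 64 addresses in the /26
    }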
Dec 13 05:56:11.444936 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.65/26] IPv6=[] ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" HandleID="k8s-pod-network.e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.410 [INFO][4071] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ca8aab-4bca-427a-b3f8-a2ed09442b71", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-78d8447597-nztwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3450b8998f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.410 [INFO][4071] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.65/32] ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.410 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3450b8998f0 ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.416 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.418 [INFO][4071] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ca8aab-4bca-427a-b3f8-a2ed09442b71", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef", Pod:"calico-apiserver-78d8447597-nztwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3450b8998f0", MAC:"f2:5a:93:06:01:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:11.446380 containerd[1496]: 2024-12-13 05:56:11.436 [INFO][4071] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-nztwj" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:11.538234 containerd[1496]: time="2024-12-13T05:56:11.535349200Z" level=info msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" Dec 13 05:56:11.550402 systemd-networkd[1415]: calibfd924424f9: Link UP Dec 13 05:56:11.550756 systemd-networkd[1415]: calibfd924424f9: Gained carrier Dec 13 05:56:11.566423 containerd[1496]: time="2024-12-13T05:56:11.565628680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:11.579514 containerd[1496]: time="2024-12-13T05:56:11.579361418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:11.579514 containerd[1496]: time="2024-12-13T05:56:11.579432000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:11.580101 containerd[1496]: time="2024-12-13T05:56:11.580016103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.271 [INFO][4069] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0 calico-kube-controllers-64d98ccd4f- calico-system 7b95cf7e-357f-49c3-a82d-c2fe116243f9 748 0 2024-12-13 05:55:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64d98ccd4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com calico-kube-controllers-64d98ccd4f-zshqn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibfd924424f9 [] []}} ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.272 [INFO][4069] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.360 [INFO][4112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" HandleID="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.375 [INFO][4112] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" HandleID="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003182a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"calico-kube-controllers-64d98ccd4f-zshqn", "timestamp":"2024-12-13 05:56:11.360476378 +0000 UTC"}, Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.375 [INFO][4112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.404 [INFO][4112] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.462 [INFO][4112] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.474 [INFO][4112] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.485 [INFO][4112] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.490 [INFO][4112] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.494 [INFO][4112] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.494 [INFO][4112] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.497 [INFO][4112] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.507 [INFO][4112] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.521 [INFO][4112] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.66/26] block=192.168.99.64/26 handle="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.521 [INFO][4112] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.66/26] handle="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.522 [INFO][4112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
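The two concurrent CNI ADDs serialize on the host-wide IPAM lock, and the in-line timestamps show the queueing: handler [4108] holds the lock from 11.356 to 11.404, while [4112] asks at 11.375 and only acquires at 11.404, the moment [4108] releases; it then takes the next free address, .66, from the same affine block. The wait, straight from the logged timestamps:

    package main

    import "fmt"

    func main() {
        // Seconds-within-minute from the interleaved ipam entries above:
        // [4112] "About to acquire" at 11.375, "Acquired" at 11.404.
        fmt.Printf("IPAM lock wait ~ %.0f ms\n", (11.404-11.375)*1000) // 29 ms
    }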
Dec 13 05:56:11.597724 containerd[1496]: 2024-12-13 05:56:11.522 [INFO][4112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.66/26] IPv6=[] ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" HandleID="k8s-pod-network.9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.541 [INFO][4069] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0", GenerateName:"calico-kube-controllers-64d98ccd4f-", Namespace:"calico-system", SelfLink:"", UID:"7b95cf7e-357f-49c3-a82d-c2fe116243f9", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d98ccd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-64d98ccd4f-zshqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfd924424f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.541 [INFO][4069] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.66/32] ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.542 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfd924424f9 ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.550 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 
05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.556 [INFO][4069] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0", GenerateName:"calico-kube-controllers-64d98ccd4f-", Namespace:"calico-system", SelfLink:"", UID:"7b95cf7e-357f-49c3-a82d-c2fe116243f9", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d98ccd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a", Pod:"calico-kube-controllers-64d98ccd4f-zshqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfd924424f9", MAC:"82:ef:03:9a:72:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:11.599953 containerd[1496]: 2024-12-13 05:56:11.585 [INFO][4069] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a" Namespace="calico-system" Pod="calico-kube-controllers-64d98ccd4f-zshqn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:11.652214 systemd[1]: Started cri-containerd-e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef.scope - libcontainer container e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef. Dec 13 05:56:11.730170 containerd[1496]: time="2024-12-13T05:56:11.729899464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:11.731886 containerd[1496]: time="2024-12-13T05:56:11.731320379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:11.731886 containerd[1496]: time="2024-12-13T05:56:11.731555324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:11.731886 containerd[1496]: time="2024-12-13T05:56:11.731769239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:11.799434 systemd[1]: Started cri-containerd-9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a.scope - libcontainer container 9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a. Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.783 [INFO][4162] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.787 [INFO][4162] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" iface="eth0" netns="/var/run/netns/cni-0e53aa96-11f8-dd15-c5bf-7f825a5f8c21" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.797 [INFO][4162] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" iface="eth0" netns="/var/run/netns/cni-0e53aa96-11f8-dd15-c5bf-7f825a5f8c21" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.801 [INFO][4162] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" iface="eth0" netns="/var/run/netns/cni-0e53aa96-11f8-dd15-c5bf-7f825a5f8c21" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.802 [INFO][4162] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.802 [INFO][4162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.853 [INFO][4228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.854 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.857 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.876 [WARNING][4228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.876 [INFO][4228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.881 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:11.895188 containerd[1496]: 2024-12-13 05:56:11.888 [INFO][4162] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:11.898002 containerd[1496]: time="2024-12-13T05:56:11.895355455Z" level=info msg="TearDown network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" successfully" Dec 13 05:56:11.898002 containerd[1496]: time="2024-12-13T05:56:11.895392541Z" level=info msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" returns successfully" Dec 13 05:56:11.898002 containerd[1496]: time="2024-12-13T05:56:11.897689843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqlzm,Uid:1222d645-a9c3-4118-adcb-40dbd48ca62c,Namespace:kube-system,Attempt:1,}" Dec 13 05:56:11.942739 containerd[1496]: time="2024-12-13T05:56:11.942578544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-nztwj,Uid:c4ca8aab-4bca-427a-b3f8-a2ed09442b71,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef\"" Dec 13 05:56:11.948985 containerd[1496]: time="2024-12-13T05:56:11.948626655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 05:56:11.975748 systemd[1]: run-netns-cni\x2d0e53aa96\x2d11f8\x2ddd15\x2dc5bf\x2d7f825a5f8c21.mount: Deactivated successfully. Dec 13 05:56:12.063212 containerd[1496]: time="2024-12-13T05:56:12.063146460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d98ccd4f-zshqn,Uid:7b95cf7e-357f-49c3-a82d-c2fe116243f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a\"" Dec 13 05:56:12.203307 systemd-networkd[1415]: calif0f8ec0edf7: Link UP Dec 13 05:56:12.205841 systemd-networkd[1415]: calif0f8ec0edf7: Gained carrier Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.044 [INFO][4258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0 coredns-6f6b679f8f- kube-system 1222d645-a9c3-4118-adcb-40dbd48ca62c 759 0 2024-12-13 05:55:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com coredns-6f6b679f8f-lqlzm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0f8ec0edf7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.044 [INFO][4258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.118 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" HandleID="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 
13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.138 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" HandleID="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051920), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-lqlzm", "timestamp":"2024-12-13 05:56:12.118603362 +0000 UTC"}, Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.138 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.138 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.138 [INFO][4289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.143 [INFO][4289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.160 [INFO][4289] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.167 [INFO][4289] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.169 [INFO][4289] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.180 [INFO][4289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.180 [INFO][4289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.182 [INFO][4289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8 Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.188 [INFO][4289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.196 [INFO][4289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.67/26] block=192.168.99.64/26 handle="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.196 [INFO][4289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.67/26] 
handle="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.196 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:12.232393 containerd[1496]: 2024-12-13 05:56:12.196 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.67/26] IPv6=[] ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" HandleID="k8s-pod-network.11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.233904 containerd[1496]: 2024-12-13 05:56:12.199 [INFO][4258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1222d645-a9c3-4118-adcb-40dbd48ca62c", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-lqlzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f8ec0edf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:12.233904 containerd[1496]: 2024-12-13 05:56:12.199 [INFO][4258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.67/32] ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.233904 containerd[1496]: 2024-12-13 05:56:12.199 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0f8ec0edf7 ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.233904 
containerd[1496]: 2024-12-13 05:56:12.207 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.233904 containerd[1496]: 2024-12-13 05:56:12.209 [INFO][4258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1222d645-a9c3-4118-adcb-40dbd48ca62c", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8", Pod:"coredns-6f6b679f8f-lqlzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f8ec0edf7", MAC:"a6:bc:7c:15:a3:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:12.233904 containerd[1496]: 2024-12-13 05:56:12.228 [INFO][4258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8" Namespace="kube-system" Pod="coredns-6f6b679f8f-lqlzm" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:12.264488 containerd[1496]: time="2024-12-13T05:56:12.264384924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:12.264819 containerd[1496]: time="2024-12-13T05:56:12.264468426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:12.264927 containerd[1496]: time="2024-12-13T05:56:12.264874905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:12.265243 containerd[1496]: time="2024-12-13T05:56:12.265151695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:12.302210 systemd[1]: run-containerd-runc-k8s.io-11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8-runc.iWWHWF.mount: Deactivated successfully. Dec 13 05:56:12.316327 systemd[1]: Started cri-containerd-11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8.scope - libcontainer container 11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8. Dec 13 05:56:12.382612 containerd[1496]: time="2024-12-13T05:56:12.382534838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqlzm,Uid:1222d645-a9c3-4118-adcb-40dbd48ca62c,Namespace:kube-system,Attempt:1,} returns sandbox id \"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8\"" Dec 13 05:56:12.390941 containerd[1496]: time="2024-12-13T05:56:12.390894681Z" level=info msg="CreateContainer within sandbox \"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:56:12.415825 containerd[1496]: time="2024-12-13T05:56:12.415772964Z" level=info msg="CreateContainer within sandbox \"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84ab1dc81455ed62f75c1fefc1b6acecc42f43a141f5d760087bb6ca67b7ea3a\"" Dec 13 05:56:12.426260 containerd[1496]: time="2024-12-13T05:56:12.425295908Z" level=info msg="StartContainer for \"84ab1dc81455ed62f75c1fefc1b6acecc42f43a141f5d760087bb6ca67b7ea3a\"" Dec 13 05:56:12.468295 systemd[1]: Started cri-containerd-84ab1dc81455ed62f75c1fefc1b6acecc42f43a141f5d760087bb6ca67b7ea3a.scope - libcontainer container 84ab1dc81455ed62f75c1fefc1b6acecc42f43a141f5d760087bb6ca67b7ea3a. Dec 13 05:56:12.515408 containerd[1496]: time="2024-12-13T05:56:12.515347845Z" level=info msg="StartContainer for \"84ab1dc81455ed62f75c1fefc1b6acecc42f43a141f5d760087bb6ca67b7ea3a\" returns successfully" Dec 13 05:56:12.525337 systemd-networkd[1415]: cali3450b8998f0: Gained IPv6LL Dec 13 05:56:12.527930 containerd[1496]: time="2024-12-13T05:56:12.527696239Z" level=info msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" Dec 13 05:56:12.529855 containerd[1496]: time="2024-12-13T05:56:12.529778147Z" level=info msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.642 [INFO][4413] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.642 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" iface="eth0" netns="/var/run/netns/cni-d8a6ec13-ba3b-166a-548b-b113d2b463c5" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.644 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" iface="eth0" netns="/var/run/netns/cni-d8a6ec13-ba3b-166a-548b-b113d2b463c5" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.645 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" iface="eth0" netns="/var/run/netns/cni-d8a6ec13-ba3b-166a-548b-b113d2b463c5" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.645 [INFO][4413] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.645 [INFO][4413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.697 [INFO][4435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.698 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.698 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.714 [WARNING][4435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.714 [INFO][4435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.716 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:12.722375 containerd[1496]: 2024-12-13 05:56:12.718 [INFO][4413] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:12.725849 containerd[1496]: time="2024-12-13T05:56:12.723223765Z" level=info msg="TearDown network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" successfully" Dec 13 05:56:12.725849 containerd[1496]: time="2024-12-13T05:56:12.723258658Z" level=info msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" returns successfully" Dec 13 05:56:12.725849 containerd[1496]: time="2024-12-13T05:56:12.724729371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dh4s,Uid:bce5c938-2412-4bd8-bdc5-eb3ae91af6a3,Namespace:kube-system,Attempt:1,}" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.626 [INFO][4414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.627 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" iface="eth0" netns="/var/run/netns/cni-c7181d4a-cdba-a4d8-d31e-e38c23f1d7b5" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.628 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" iface="eth0" netns="/var/run/netns/cni-c7181d4a-cdba-a4d8-d31e-e38c23f1d7b5" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.628 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" iface="eth0" netns="/var/run/netns/cni-c7181d4a-cdba-a4d8-d31e-e38c23f1d7b5" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.629 [INFO][4414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.629 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.713 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.713 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.716 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.725 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.725 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.728 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:12.731296 containerd[1496]: 2024-12-13 05:56:12.730 [INFO][4414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:12.733596 containerd[1496]: time="2024-12-13T05:56:12.731691388Z" level=info msg="TearDown network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" successfully" Dec 13 05:56:12.733596 containerd[1496]: time="2024-12-13T05:56:12.731717329Z" level=info msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" returns successfully" Dec 13 05:56:12.733596 containerd[1496]: time="2024-12-13T05:56:12.732811907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4j64,Uid:3c45b9e9-6ab2-4973-81f3-6dae0df5a18c,Namespace:calico-system,Attempt:1,}" Dec 13 05:56:12.846840 systemd-networkd[1415]: vxlan.calico: Gained IPv6LL Dec 13 05:56:12.934055 kubelet[2663]: I1213 05:56:12.933870 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lqlzm" podStartSLOduration=38.933834272 podStartE2EDuration="38.933834272s" podCreationTimestamp="2024-12-13 05:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:56:12.90382647 +0000 UTC m=+44.531111006" watchObservedRunningTime="2024-12-13 05:56:12.933834272 +0000 UTC m=+44.561118791" Dec 13 05:56:12.983010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567885144.mount: Deactivated successfully. Dec 13 05:56:12.983395 systemd[1]: run-netns-cni\x2dc7181d4a\x2dcdba\x2da4d8\x2dd31e\x2de38c23f1d7b5.mount: Deactivated successfully. Dec 13 05:56:12.983517 systemd[1]: run-netns-cni\x2dd8a6ec13\x2dba3b\x2d166a\x2d548b\x2db113d2b463c5.mount: Deactivated successfully. 
Dec 13 05:56:13.037412 systemd-networkd[1415]: calibfd924424f9: Gained IPv6LL Dec 13 05:56:13.120827 systemd-networkd[1415]: cali97355072a74: Link UP Dec 13 05:56:13.121762 systemd-networkd[1415]: cali97355072a74: Gained carrier Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:12.811 [INFO][4444] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0 coredns-6f6b679f8f- kube-system bce5c938-2412-4bd8-bdc5-eb3ae91af6a3 773 0 2024-12-13 05:55:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com coredns-6f6b679f8f-5dh4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97355072a74 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:12.812 [INFO][4444] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:12.941 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" HandleID="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.075 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" HandleID="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c3af0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-5dh4s", "timestamp":"2024-12-13 05:56:12.941670412 +0000 UTC"}, Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.076 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.076 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.076 [INFO][4468] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.079 [INFO][4468] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.085 [INFO][4468] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.091 [INFO][4468] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.094 [INFO][4468] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.097 [INFO][4468] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.097 [INFO][4468] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.099 [INFO][4468] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8 Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.104 [INFO][4468] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.112 [INFO][4468] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.68/26] block=192.168.99.64/26 handle="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.112 [INFO][4468] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.68/26] handle="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.112 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:56:13.144876 containerd[1496]: 2024-12-13 05:56:13.112 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.68/26] IPv6=[] ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" HandleID="k8s-pod-network.03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.116 [INFO][4444] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-5dh4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97355072a74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.116 [INFO][4444] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.68/32] ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.116 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97355072a74 ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.120 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" 
WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.122 [INFO][4444] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8", Pod:"coredns-6f6b679f8f-5dh4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97355072a74", MAC:"0a:e6:17:5c:fd:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:13.147700 containerd[1496]: 2024-12-13 05:56:13.142 [INFO][4444] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8" Namespace="kube-system" Pod="coredns-6f6b679f8f-5dh4s" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:13.200659 containerd[1496]: time="2024-12-13T05:56:13.199672635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:13.200659 containerd[1496]: time="2024-12-13T05:56:13.199787755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:13.200659 containerd[1496]: time="2024-12-13T05:56:13.199814119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:13.201412 containerd[1496]: time="2024-12-13T05:56:13.201321737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:13.250278 systemd[1]: run-containerd-runc-k8s.io-03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8-runc.0d468N.mount: Deactivated successfully. Dec 13 05:56:13.266926 systemd-networkd[1415]: calibd042bd2073: Link UP Dec 13 05:56:13.268397 systemd[1]: Started cri-containerd-03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8.scope - libcontainer container 03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8. Dec 13 05:56:13.269654 systemd-networkd[1415]: calibd042bd2073: Gained carrier Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:12.887 [INFO][4453] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0 csi-node-driver- calico-system 3c45b9e9-6ab2-4973-81f3-6dae0df5a18c 772 0 2024-12-13 05:55:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com csi-node-driver-k4j64 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd042bd2073 [] []}} ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:12.887 [INFO][4453] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:12.996 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" HandleID="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.078 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" HandleID="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"csi-node-driver-k4j64", "timestamp":"2024-12-13 05:56:12.996467396 +0000 UTC"}, Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.078 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.112 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.113 [INFO][4474] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.180 [INFO][4474] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.188 [INFO][4474] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.195 [INFO][4474] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.199 [INFO][4474] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.205 [INFO][4474] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.205 [INFO][4474] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.219 [INFO][4474] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.225 [INFO][4474] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.240 [INFO][4474] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.69/26] block=192.168.99.64/26 handle="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.240 [INFO][4474] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.69/26] handle="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.240 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:56:13.308990 containerd[1496]: 2024-12-13 05:56:13.240 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.69/26] IPv6=[] ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" HandleID="k8s-pod-network.8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.254 [INFO][4453] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-k4j64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd042bd2073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.254 [INFO][4453] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.69/32] ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.254 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd042bd2073 ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.273 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.274 [INFO][4453] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac", Pod:"csi-node-driver-k4j64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd042bd2073", MAC:"42:03:8d:31:be:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:13.311812 containerd[1496]: 2024-12-13 05:56:13.304 [INFO][4453] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac" Namespace="calico-system" Pod="csi-node-driver-k4j64" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:13.380027 containerd[1496]: time="2024-12-13T05:56:13.377396453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:13.380027 containerd[1496]: time="2024-12-13T05:56:13.377498123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:13.380027 containerd[1496]: time="2024-12-13T05:56:13.377521833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:13.380027 containerd[1496]: time="2024-12-13T05:56:13.377666870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:13.421186 containerd[1496]: time="2024-12-13T05:56:13.419614131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dh4s,Uid:bce5c938-2412-4bd8-bdc5-eb3ae91af6a3,Namespace:kube-system,Attempt:1,} returns sandbox id \"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8\"" Dec 13 05:56:13.431170 containerd[1496]: time="2024-12-13T05:56:13.430443422Z" level=info msg="CreateContainer within sandbox \"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:56:13.435577 systemd[1]: Started cri-containerd-8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac.scope - libcontainer container 8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac. Dec 13 05:56:13.469641 containerd[1496]: time="2024-12-13T05:56:13.469171328Z" level=info msg="CreateContainer within sandbox \"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27d7c22455fd4a8411ce9ef4cc00b2b26539cce7aa80a8f297fbee3cf35291cc\"" Dec 13 05:56:13.481171 containerd[1496]: time="2024-12-13T05:56:13.480701016Z" level=info msg="StartContainer for \"27d7c22455fd4a8411ce9ef4cc00b2b26539cce7aa80a8f297fbee3cf35291cc\"" Dec 13 05:56:13.579390 systemd[1]: Started cri-containerd-27d7c22455fd4a8411ce9ef4cc00b2b26539cce7aa80a8f297fbee3cf35291cc.scope - libcontainer container 27d7c22455fd4a8411ce9ef4cc00b2b26539cce7aa80a8f297fbee3cf35291cc. Dec 13 05:56:13.581256 containerd[1496]: time="2024-12-13T05:56:13.580606467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4j64,Uid:3c45b9e9-6ab2-4973-81f3-6dae0df5a18c,Namespace:calico-system,Attempt:1,} returns sandbox id \"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac\"" Dec 13 05:56:13.670195 containerd[1496]: time="2024-12-13T05:56:13.669794397Z" level=info msg="StartContainer for \"27d7c22455fd4a8411ce9ef4cc00b2b26539cce7aa80a8f297fbee3cf35291cc\" returns successfully" Dec 13 05:56:13.961030 kubelet[2663]: I1213 05:56:13.960717 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5dh4s" podStartSLOduration=39.960693694 podStartE2EDuration="39.960693694s" podCreationTimestamp="2024-12-13 05:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:56:13.93004591 +0000 UTC m=+45.557330438" watchObservedRunningTime="2024-12-13 05:56:13.960693694 +0000 UTC m=+45.587978222" Dec 13 05:56:13.997394 systemd-networkd[1415]: calif0f8ec0edf7: Gained IPv6LL Dec 13 05:56:14.445355 systemd-networkd[1415]: calibd042bd2073: Gained IPv6LL Dec 13 05:56:14.511674 systemd-networkd[1415]: cali97355072a74: Gained IPv6LL Dec 13 05:56:14.527724 containerd[1496]: time="2024-12-13T05:56:14.527228223Z" level=info msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.639 [INFO][4655] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.640 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" iface="eth0" netns="/var/run/netns/cni-2a2ceceb-e245-4ba8-1e17-39e9215a6c33" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.641 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" iface="eth0" netns="/var/run/netns/cni-2a2ceceb-e245-4ba8-1e17-39e9215a6c33" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.642 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" iface="eth0" netns="/var/run/netns/cni-2a2ceceb-e245-4ba8-1e17-39e9215a6c33" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.642 [INFO][4655] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.642 [INFO][4655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.692 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.692 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.692 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.705 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.705 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.707 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:14.712213 containerd[1496]: 2024-12-13 05:56:14.709 [INFO][4655] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:14.718471 containerd[1496]: time="2024-12-13T05:56:14.712194557Z" level=info msg="TearDown network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" successfully" Dec 13 05:56:14.718471 containerd[1496]: time="2024-12-13T05:56:14.712240834Z" level=info msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" returns successfully" Dec 13 05:56:14.718471 containerd[1496]: time="2024-12-13T05:56:14.715208451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-jg7hn,Uid:2ea0d766-8ed2-408a-bdc9-39aa78d68aef,Namespace:calico-apiserver,Attempt:1,}" Dec 13 05:56:14.717895 systemd[1]: run-netns-cni\x2d2a2ceceb\x2de245\x2d4ba8\x2d1e17\x2d39e9215a6c33.mount: Deactivated successfully. Dec 13 05:56:15.055794 systemd-networkd[1415]: cali83aad013bd6: Link UP Dec 13 05:56:15.057578 systemd-networkd[1415]: cali83aad013bd6: Gained carrier Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.844 [INFO][4667] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0 calico-apiserver-78d8447597- calico-apiserver 2ea0d766-8ed2-408a-bdc9-39aa78d68aef 804 0 2024-12-13 05:55:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d8447597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-kh3sk.gb1.brightbox.com calico-apiserver-78d8447597-jg7hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83aad013bd6 [] []}} ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.844 [INFO][4667] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.944 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" HandleID="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.974 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" HandleID="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039edf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-kh3sk.gb1.brightbox.com", "pod":"calico-apiserver-78d8447597-jg7hn", "timestamp":"2024-12-13 05:56:14.94399916 +0000 UTC"}, 
Hostname:"srv-kh3sk.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.974 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.974 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.975 [INFO][4679] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-kh3sk.gb1.brightbox.com' Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.979 [INFO][4679] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.988 [INFO][4679] ipam/ipam.go 372: Looking up existing affinities for host host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:14.996 [INFO][4679] ipam/ipam.go 489: Trying affinity for 192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.000 [INFO][4679] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.004 [INFO][4679] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.005 [INFO][4679] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.009 [INFO][4679] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.020 [INFO][4679] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.036 [INFO][4679] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.70/26] block=192.168.99.64/26 handle="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.036 [INFO][4679] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.70/26] handle="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" host="srv-kh3sk.gb1.brightbox.com" Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.036 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:56:15.085320 containerd[1496]: 2024-12-13 05:56:15.036 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.70/26] IPv6=[] ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" HandleID="k8s-pod-network.2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.042 [INFO][4667] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea0d766-8ed2-408a-bdc9-39aa78d68aef", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-78d8447597-jg7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83aad013bd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.042 [INFO][4667] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.70/32] ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.043 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83aad013bd6 ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.058 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.059 [INFO][4667] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea0d766-8ed2-408a-bdc9-39aa78d68aef", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df", Pod:"calico-apiserver-78d8447597-jg7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83aad013bd6", MAC:"82:09:51:92:41:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:15.088766 containerd[1496]: 2024-12-13 05:56:15.073 [INFO][4667] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df" Namespace="calico-apiserver" Pod="calico-apiserver-78d8447597-jg7hn" WorkloadEndpoint="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:15.147824 containerd[1496]: time="2024-12-13T05:56:15.147717157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:15.147824 containerd[1496]: time="2024-12-13T05:56:15.147790767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:15.149846 containerd[1496]: time="2024-12-13T05:56:15.147814523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:15.149846 containerd[1496]: time="2024-12-13T05:56:15.147926666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:15.201783 systemd[1]: run-containerd-runc-k8s.io-2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df-runc.pMoxfW.mount: Deactivated successfully. Dec 13 05:56:15.210594 systemd[1]: Started cri-containerd-2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df.scope - libcontainer container 2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df. 
Dec 13 05:56:15.293761 containerd[1496]: time="2024-12-13T05:56:15.293458669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8447597-jg7hn,Uid:2ea0d766-8ed2-408a-bdc9-39aa78d68aef,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df\"" Dec 13 05:56:15.993413 containerd[1496]: time="2024-12-13T05:56:15.993218342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:15.995537 containerd[1496]: time="2024-12-13T05:56:15.995485428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 05:56:15.995993 containerd[1496]: time="2024-12-13T05:56:15.995935598Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:15.999037 containerd[1496]: time="2024-12-13T05:56:15.999003452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:16.000435 containerd[1496]: time="2024-12-13T05:56:16.000250196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.051578526s" Dec 13 05:56:16.000435 containerd[1496]: time="2024-12-13T05:56:16.000297075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 05:56:16.002581 containerd[1496]: time="2024-12-13T05:56:16.002007183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 05:56:16.004385 containerd[1496]: time="2024-12-13T05:56:16.004353072Z" level=info msg="CreateContainer within sandbox \"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 05:56:16.025672 containerd[1496]: time="2024-12-13T05:56:16.025569683Z" level=info msg="CreateContainer within sandbox \"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64ad9eb7f7eb6b8e5bdea81c459716014ae3c5cf3cd7238f74977495550abcdb\"" Dec 13 05:56:16.027661 containerd[1496]: time="2024-12-13T05:56:16.026506299Z" level=info msg="StartContainer for \"64ad9eb7f7eb6b8e5bdea81c459716014ae3c5cf3cd7238f74977495550abcdb\"" Dec 13 05:56:16.068400 systemd[1]: Started cri-containerd-64ad9eb7f7eb6b8e5bdea81c459716014ae3c5cf3cd7238f74977495550abcdb.scope - libcontainer container 64ad9eb7f7eb6b8e5bdea81c459716014ae3c5cf3cd7238f74977495550abcdb. 
Dec 13 05:56:16.134644 containerd[1496]: time="2024-12-13T05:56:16.134172209Z" level=info msg="StartContainer for \"64ad9eb7f7eb6b8e5bdea81c459716014ae3c5cf3cd7238f74977495550abcdb\" returns successfully" Dec 13 05:56:16.173367 systemd-networkd[1415]: cali83aad013bd6: Gained IPv6LL Dec 13 05:56:16.925766 kubelet[2663]: I1213 05:56:16.924773 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d8447597-nztwj" podStartSLOduration=30.86959909 podStartE2EDuration="34.924749028s" podCreationTimestamp="2024-12-13 05:55:42 +0000 UTC" firstStartedPulling="2024-12-13 05:56:11.946658318 +0000 UTC m=+43.573942828" lastFinishedPulling="2024-12-13 05:56:16.001808244 +0000 UTC m=+47.629092766" observedRunningTime="2024-12-13 05:56:16.924564706 +0000 UTC m=+48.551849242" watchObservedRunningTime="2024-12-13 05:56:16.924749028 +0000 UTC m=+48.552033547" Dec 13 05:56:17.916364 kubelet[2663]: I1213 05:56:17.915799 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:56:18.980091 containerd[1496]: time="2024-12-13T05:56:18.980033039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:18.982843 containerd[1496]: time="2024-12-13T05:56:18.982783440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 05:56:18.985107 containerd[1496]: time="2024-12-13T05:56:18.984074898Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:18.988941 containerd[1496]: time="2024-12-13T05:56:18.988906929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:18.991474 containerd[1496]: time="2024-12-13T05:56:18.991430170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.989374207s" Dec 13 05:56:18.991574 containerd[1496]: time="2024-12-13T05:56:18.991474448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 05:56:18.995677 containerd[1496]: time="2024-12-13T05:56:18.995608159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 05:56:19.040155 containerd[1496]: time="2024-12-13T05:56:19.040034920Z" level=info msg="CreateContainer within sandbox \"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 05:56:19.071727 containerd[1496]: time="2024-12-13T05:56:19.071533740Z" level=info msg="CreateContainer within sandbox \"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15\"" Dec 13 05:56:19.073080 
containerd[1496]: time="2024-12-13T05:56:19.072921682Z" level=info msg="StartContainer for \"4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15\"" Dec 13 05:56:19.128360 systemd[1]: Started cri-containerd-4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15.scope - libcontainer container 4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15. Dec 13 05:56:19.195251 containerd[1496]: time="2024-12-13T05:56:19.194939292Z" level=info msg="StartContainer for \"4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15\" returns successfully" Dec 13 05:56:20.598748 containerd[1496]: time="2024-12-13T05:56:20.598667334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:20.600219 containerd[1496]: time="2024-12-13T05:56:20.600172845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 05:56:20.601077 containerd[1496]: time="2024-12-13T05:56:20.601003190Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:20.604157 containerd[1496]: time="2024-12-13T05:56:20.604026753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:20.605634 containerd[1496]: time="2024-12-13T05:56:20.605457427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.60953408s" Dec 13 05:56:20.605634 containerd[1496]: time="2024-12-13T05:56:20.605499388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 05:56:20.607046 containerd[1496]: time="2024-12-13T05:56:20.606859563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 05:56:20.609085 containerd[1496]: time="2024-12-13T05:56:20.609037909Z" level=info msg="CreateContainer within sandbox \"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 05:56:20.632172 containerd[1496]: time="2024-12-13T05:56:20.632069270Z" level=info msg="CreateContainer within sandbox \"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4be30aacd70665a7462c6ed15716ccca6fea4d66a0168d6025e26e1fd20be005\"" Dec 13 05:56:20.634711 containerd[1496]: time="2024-12-13T05:56:20.634592232Z" level=info msg="StartContainer for \"4be30aacd70665a7462c6ed15716ccca6fea4d66a0168d6025e26e1fd20be005\"" Dec 13 05:56:20.706389 systemd[1]: Started cri-containerd-4be30aacd70665a7462c6ed15716ccca6fea4d66a0168d6025e26e1fd20be005.scope - libcontainer container 4be30aacd70665a7462c6ed15716ccca6fea4d66a0168d6025e26e1fd20be005. 
Dec 13 05:56:20.755446 containerd[1496]: time="2024-12-13T05:56:20.755388613Z" level=info msg="StartContainer for \"4be30aacd70665a7462c6ed15716ccca6fea4d66a0168d6025e26e1fd20be005\" returns successfully" Dec 13 05:56:20.929394 kubelet[2663]: I1213 05:56:20.929155 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:56:20.992596 containerd[1496]: time="2024-12-13T05:56:20.991529158Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:20.993903 containerd[1496]: time="2024-12-13T05:56:20.993860787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 05:56:20.998066 containerd[1496]: time="2024-12-13T05:56:20.998007260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 391.10753ms" Dec 13 05:56:20.998242 containerd[1496]: time="2024-12-13T05:56:20.998212345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 05:56:20.999832 containerd[1496]: time="2024-12-13T05:56:20.999801156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 05:56:21.002858 containerd[1496]: time="2024-12-13T05:56:21.002379740Z" level=info msg="CreateContainer within sandbox \"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 05:56:21.024168 containerd[1496]: time="2024-12-13T05:56:21.023960845Z" level=info msg="CreateContainer within sandbox \"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b691e6ef430f23143fba04ff469a57e6af9f3b944fed867492b02b8e1a51f91b\"" Dec 13 05:56:21.024952 containerd[1496]: time="2024-12-13T05:56:21.024835177Z" level=info msg="StartContainer for \"b691e6ef430f23143fba04ff469a57e6af9f3b944fed867492b02b8e1a51f91b\"" Dec 13 05:56:21.066394 systemd[1]: Started cri-containerd-b691e6ef430f23143fba04ff469a57e6af9f3b944fed867492b02b8e1a51f91b.scope - libcontainer container b691e6ef430f23143fba04ff469a57e6af9f3b944fed867492b02b8e1a51f91b. 
Dec 13 05:56:21.131772 containerd[1496]: time="2024-12-13T05:56:21.131462470Z" level=info msg="StartContainer for \"b691e6ef430f23143fba04ff469a57e6af9f3b944fed867492b02b8e1a51f91b\" returns successfully" Dec 13 05:56:21.951568 kubelet[2663]: I1213 05:56:21.950028 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64d98ccd4f-zshqn" podStartSLOduration=32.024227388 podStartE2EDuration="38.950003753s" podCreationTimestamp="2024-12-13 05:55:43 +0000 UTC" firstStartedPulling="2024-12-13 05:56:12.067596833 +0000 UTC m=+43.694881343" lastFinishedPulling="2024-12-13 05:56:18.993373197 +0000 UTC m=+50.620657708" observedRunningTime="2024-12-13 05:56:19.945155421 +0000 UTC m=+51.572439942" watchObservedRunningTime="2024-12-13 05:56:21.950003753 +0000 UTC m=+53.577288263" Dec 13 05:56:22.254982 kubelet[2663]: I1213 05:56:22.254328 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d8447597-jg7hn" podStartSLOduration=34.551383139 podStartE2EDuration="40.254307388s" podCreationTimestamp="2024-12-13 05:55:42 +0000 UTC" firstStartedPulling="2024-12-13 05:56:15.296574108 +0000 UTC m=+46.923858617" lastFinishedPulling="2024-12-13 05:56:20.999498352 +0000 UTC m=+52.626782866" observedRunningTime="2024-12-13 05:56:21.951503092 +0000 UTC m=+53.578787627" watchObservedRunningTime="2024-12-13 05:56:22.254307388 +0000 UTC m=+53.881591901" Dec 13 05:56:22.770302 containerd[1496]: time="2024-12-13T05:56:22.770231971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:22.771649 containerd[1496]: time="2024-12-13T05:56:22.771576669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 05:56:22.772707 containerd[1496]: time="2024-12-13T05:56:22.772632857Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:22.783793 containerd[1496]: time="2024-12-13T05:56:22.783718388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:22.785174 containerd[1496]: time="2024-12-13T05:56:22.784956325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.784320253s" Dec 13 05:56:22.785174 containerd[1496]: time="2024-12-13T05:56:22.785006375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 05:56:22.788887 containerd[1496]: time="2024-12-13T05:56:22.788845823Z" level=info msg="CreateContainer within sandbox \"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 05:56:22.828874 containerd[1496]: 
time="2024-12-13T05:56:22.828757022Z" level=info msg="CreateContainer within sandbox \"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468\"" Dec 13 05:56:22.830236 containerd[1496]: time="2024-12-13T05:56:22.830082842Z" level=info msg="StartContainer for \"6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468\"" Dec 13 05:56:22.879372 systemd[1]: run-containerd-runc-k8s.io-6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468-runc.qqc4GI.mount: Deactivated successfully. Dec 13 05:56:22.888355 systemd[1]: Started cri-containerd-6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468.scope - libcontainer container 6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468. Dec 13 05:56:22.929638 containerd[1496]: time="2024-12-13T05:56:22.929493504Z" level=info msg="StartContainer for \"6dad431f5b6461e4f5ad4ded940648e6b3a75eedaec495edca6fe5127cc4a468\" returns successfully" Dec 13 05:56:22.961742 kubelet[2663]: I1213 05:56:22.961557 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k4j64" podStartSLOduration=30.761658986 podStartE2EDuration="39.961525712s" podCreationTimestamp="2024-12-13 05:55:43 +0000 UTC" firstStartedPulling="2024-12-13 05:56:13.586507453 +0000 UTC m=+45.213791958" lastFinishedPulling="2024-12-13 05:56:22.786374166 +0000 UTC m=+54.413658684" observedRunningTime="2024-12-13 05:56:22.961005418 +0000 UTC m=+54.588289948" watchObservedRunningTime="2024-12-13 05:56:22.961525712 +0000 UTC m=+54.588810226" Dec 13 05:56:23.104691 kubelet[2663]: I1213 05:56:23.104595 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:56:23.947414 kubelet[2663]: I1213 05:56:23.947329 2663 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 05:56:23.958401 kubelet[2663]: I1213 05:56:23.958365 2663 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 05:56:28.524145 containerd[1496]: time="2024-12-13T05:56:28.523809992Z" level=info msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.657 [WARNING][5014] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea0d766-8ed2-408a-bdc9-39aa78d68aef", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df", Pod:"calico-apiserver-78d8447597-jg7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83aad013bd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.658 [INFO][5014] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.658 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" iface="eth0" netns="" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.658 [INFO][5014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.658 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.697 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.698 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.698 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.706 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.706 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.709 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:28.713298 containerd[1496]: 2024-12-13 05:56:28.711 [INFO][5014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.713298 containerd[1496]: time="2024-12-13T05:56:28.713291298Z" level=info msg="TearDown network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" successfully" Dec 13 05:56:28.715486 containerd[1496]: time="2024-12-13T05:56:28.713324318Z" level=info msg="StopPodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" returns successfully" Dec 13 05:56:28.743078 containerd[1496]: time="2024-12-13T05:56:28.743017097Z" level=info msg="RemovePodSandbox for \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" Dec 13 05:56:28.743078 containerd[1496]: time="2024-12-13T05:56:28.743073333Z" level=info msg="Forcibly stopping sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\"" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.798 [WARNING][5041] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea0d766-8ed2-408a-bdc9-39aa78d68aef", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"2cabcf01908f22e1eaf7e2b3dc87710dd137042e4320615e4718b87cd236b7df", Pod:"calico-apiserver-78d8447597-jg7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83aad013bd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.798 [INFO][5041] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.798 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" iface="eth0" netns="" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.798 [INFO][5041] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.798 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.830 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.831 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.831 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.839 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.839 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" HandleID="k8s-pod-network.b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--jg7hn-eth0" Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.841 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:28.845533 containerd[1496]: 2024-12-13 05:56:28.843 [INFO][5041] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4" Dec 13 05:56:28.847537 containerd[1496]: time="2024-12-13T05:56:28.845637805Z" level=info msg="TearDown network for sandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" successfully" Dec 13 05:56:28.849611 containerd[1496]: time="2024-12-13T05:56:28.849575782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:28.861108 containerd[1496]: time="2024-12-13T05:56:28.861065521Z" level=info msg="RemovePodSandbox \"b0fbbb7e307b91f729e6ed3d9b8328aad22d62d211213bcd356317beffdb1bd4\" returns successfully" Dec 13 05:56:28.862176 containerd[1496]: time="2024-12-13T05:56:28.862050492Z" level=info msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.921 [WARNING][5065] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ca8aab-4bca-427a-b3f8-a2ed09442b71", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef", Pod:"calico-apiserver-78d8447597-nztwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3450b8998f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.921 [INFO][5065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.921 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" iface="eth0" netns="" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.921 [INFO][5065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.921 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.955 [INFO][5071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.955 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.956 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.964 [WARNING][5071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.964 [INFO][5071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.966 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:28.970064 containerd[1496]: 2024-12-13 05:56:28.968 [INFO][5065] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:28.970064 containerd[1496]: time="2024-12-13T05:56:28.969993261Z" level=info msg="TearDown network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" successfully" Dec 13 05:56:28.970064 containerd[1496]: time="2024-12-13T05:56:28.970036071Z" level=info msg="StopPodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" returns successfully" Dec 13 05:56:28.972386 containerd[1496]: time="2024-12-13T05:56:28.970501232Z" level=info msg="RemovePodSandbox for \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" Dec 13 05:56:28.972386 containerd[1496]: time="2024-12-13T05:56:28.970549292Z" level=info msg="Forcibly stopping sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\"" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.023 [WARNING][5089] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0", GenerateName:"calico-apiserver-78d8447597-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ca8aab-4bca-427a-b3f8-a2ed09442b71", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8447597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"e1d2f17cdd2ed6fd8abae7deb16100546087ea843ce79bf9aae456d2a5438fef", Pod:"calico-apiserver-78d8447597-nztwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3450b8998f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.023 [INFO][5089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.023 [INFO][5089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" iface="eth0" netns="" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.023 [INFO][5089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.023 [INFO][5089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.054 [INFO][5095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.054 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.054 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.064 [WARNING][5095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.064 [INFO][5095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" HandleID="k8s-pod-network.3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--apiserver--78d8447597--nztwj-eth0" Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.067 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.070791 containerd[1496]: 2024-12-13 05:56:29.069 [INFO][5089] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc" Dec 13 05:56:29.071891 containerd[1496]: time="2024-12-13T05:56:29.070892732Z" level=info msg="TearDown network for sandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" successfully" Dec 13 05:56:29.074097 containerd[1496]: time="2024-12-13T05:56:29.074058893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:29.074301 containerd[1496]: time="2024-12-13T05:56:29.074189203Z" level=info msg="RemovePodSandbox \"3a93f27fd6f69867a79ef5bae65bb733441a7441bc3a9c31dec8bc780f1117cc\" returns successfully" Dec 13 05:56:29.075062 containerd[1496]: time="2024-12-13T05:56:29.074919916Z" level=info msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.150 [WARNING][5113] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8", Pod:"coredns-6f6b679f8f-5dh4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97355072a74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.150 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.150 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" iface="eth0" netns="" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.151 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.151 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.189 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.189 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.189 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.198 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.198 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.200 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.205411 containerd[1496]: 2024-12-13 05:56:29.202 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.205411 containerd[1496]: time="2024-12-13T05:56:29.204957163Z" level=info msg="TearDown network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" successfully" Dec 13 05:56:29.205411 containerd[1496]: time="2024-12-13T05:56:29.204993991Z" level=info msg="StopPodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" returns successfully" Dec 13 05:56:29.207761 containerd[1496]: time="2024-12-13T05:56:29.205965913Z" level=info msg="RemovePodSandbox for \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" Dec 13 05:56:29.207761 containerd[1496]: time="2024-12-13T05:56:29.206004344Z" level=info msg="Forcibly stopping sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\"" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.261 [WARNING][5138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bce5c938-2412-4bd8-bdc5-eb3ae91af6a3", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"03fd7f67d3d90689bcae2f6408af045f9be7dda48a0913c189032a06700f7fd8", Pod:"coredns-6f6b679f8f-5dh4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97355072a74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.261 [INFO][5138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.261 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" iface="eth0" netns="" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.261 [INFO][5138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.261 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.291 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.291 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.291 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.302 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.302 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" HandleID="k8s-pod-network.962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--5dh4s-eth0" Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.304 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.308919 containerd[1496]: 2024-12-13 05:56:29.307 [INFO][5138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046" Dec 13 05:56:29.310400 containerd[1496]: time="2024-12-13T05:56:29.308963510Z" level=info msg="TearDown network for sandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" successfully" Dec 13 05:56:29.313902 containerd[1496]: time="2024-12-13T05:56:29.313834764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:29.313981 containerd[1496]: time="2024-12-13T05:56:29.313927557Z" level=info msg="RemovePodSandbox \"962ab478e73d78e80649aa31220a19563f5cf9141947e7399fa9b67abc4a5046\" returns successfully" Dec 13 05:56:29.314732 containerd[1496]: time="2024-12-13T05:56:29.314649971Z" level=info msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.407 [WARNING][5163] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1222d645-a9c3-4118-adcb-40dbd48ca62c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8", Pod:"coredns-6f6b679f8f-lqlzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f8ec0edf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.409 [INFO][5163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.409 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" iface="eth0" netns="" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.409 [INFO][5163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.409 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.461 [INFO][5169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.461 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.461 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.471 [WARNING][5169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.472 [INFO][5169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.474 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.478487 containerd[1496]: 2024-12-13 05:56:29.476 [INFO][5163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.478487 containerd[1496]: time="2024-12-13T05:56:29.478254692Z" level=info msg="TearDown network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" successfully" Dec 13 05:56:29.478487 containerd[1496]: time="2024-12-13T05:56:29.478317510Z" level=info msg="StopPodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" returns successfully" Dec 13 05:56:29.480545 containerd[1496]: time="2024-12-13T05:56:29.478981875Z" level=info msg="RemovePodSandbox for \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" Dec 13 05:56:29.480545 containerd[1496]: time="2024-12-13T05:56:29.479020681Z" level=info msg="Forcibly stopping sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\"" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.532 [WARNING][5187] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1222d645-a9c3-4118-adcb-40dbd48ca62c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"11c1eab641565afbb5354d75063d8fbc8253e65c06d2d2b1ccf1e01874fb05f8", Pod:"coredns-6f6b679f8f-lqlzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f8ec0edf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.533 [INFO][5187] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.533 [INFO][5187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" iface="eth0" netns="" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.533 [INFO][5187] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.533 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.567 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.567 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.567 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.576 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.576 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" HandleID="k8s-pod-network.c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-coredns--6f6b679f8f--lqlzm-eth0" Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.578 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.582165 containerd[1496]: 2024-12-13 05:56:29.580 [INFO][5187] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3" Dec 13 05:56:29.583478 containerd[1496]: time="2024-12-13T05:56:29.582223250Z" level=info msg="TearDown network for sandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" successfully" Dec 13 05:56:29.585924 containerd[1496]: time="2024-12-13T05:56:29.585886880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:29.586070 containerd[1496]: time="2024-12-13T05:56:29.585952239Z" level=info msg="RemovePodSandbox \"c60fb92db27d926b4590764131af4dbc77534b9a9b94b62b7b9c34ba185039e3\" returns successfully" Dec 13 05:56:29.586932 containerd[1496]: time="2024-12-13T05:56:29.586588511Z" level=info msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.639 [WARNING][5211] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0", GenerateName:"calico-kube-controllers-64d98ccd4f-", Namespace:"calico-system", SelfLink:"", UID:"7b95cf7e-357f-49c3-a82d-c2fe116243f9", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d98ccd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a", Pod:"calico-kube-controllers-64d98ccd4f-zshqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfd924424f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.639 [INFO][5211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.639 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" iface="eth0" netns="" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.639 [INFO][5211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.639 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.673 [INFO][5218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.673 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.673 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.682 [WARNING][5218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.683 [INFO][5218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.685 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.689085 containerd[1496]: 2024-12-13 05:56:29.687 [INFO][5211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.689937 containerd[1496]: time="2024-12-13T05:56:29.689752053Z" level=info msg="TearDown network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" successfully" Dec 13 05:56:29.689937 containerd[1496]: time="2024-12-13T05:56:29.689801816Z" level=info msg="StopPodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" returns successfully" Dec 13 05:56:29.690482 containerd[1496]: time="2024-12-13T05:56:29.690441193Z" level=info msg="RemovePodSandbox for \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" Dec 13 05:56:29.690482 containerd[1496]: time="2024-12-13T05:56:29.690488099Z" level=info msg="Forcibly stopping sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\"" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.761 [WARNING][5236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0", GenerateName:"calico-kube-controllers-64d98ccd4f-", Namespace:"calico-system", SelfLink:"", UID:"7b95cf7e-357f-49c3-a82d-c2fe116243f9", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d98ccd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"9e288ce354751b527fa48d6baac6e4c4da26fc27bbc3d7cfbd2e5ca7a2dfdc8a", Pod:"calico-kube-controllers-64d98ccd4f-zshqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfd924424f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.761 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.761 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" iface="eth0" netns="" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.761 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.761 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.793 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.794 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.794 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.802 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.803 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" HandleID="k8s-pod-network.c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Workload="srv--kh3sk.gb1.brightbox.com-k8s-calico--kube--controllers--64d98ccd4f--zshqn-eth0" Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.804 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.808498 containerd[1496]: 2024-12-13 05:56:29.806 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3" Dec 13 05:56:29.809304 containerd[1496]: time="2024-12-13T05:56:29.808582280Z" level=info msg="TearDown network for sandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" successfully" Dec 13 05:56:29.812300 containerd[1496]: time="2024-12-13T05:56:29.812250848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:29.812472 containerd[1496]: time="2024-12-13T05:56:29.812323994Z" level=info msg="RemovePodSandbox \"c027cea2ca5249a2471bef65d41b47526a362137c06c5898f59893049787bda3\" returns successfully" Dec 13 05:56:29.813113 containerd[1496]: time="2024-12-13T05:56:29.813014625Z" level=info msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.870 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac", Pod:"csi-node-driver-k4j64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd042bd2073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.871 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.871 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" iface="eth0" netns="" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.871 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.871 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.902 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.902 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.902 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.911 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.911 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.913 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:29.916670 containerd[1496]: 2024-12-13 05:56:29.914 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:29.918649 containerd[1496]: time="2024-12-13T05:56:29.916715427Z" level=info msg="TearDown network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" successfully" Dec 13 05:56:29.918649 containerd[1496]: time="2024-12-13T05:56:29.916758269Z" level=info msg="StopPodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" returns successfully" Dec 13 05:56:29.918649 containerd[1496]: time="2024-12-13T05:56:29.917922981Z" level=info msg="RemovePodSandbox for \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" Dec 13 05:56:29.918649 containerd[1496]: time="2024-12-13T05:56:29.917961779Z" level=info msg="Forcibly stopping sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\"" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:29.975 [WARNING][5284] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c45b9e9-6ab2-4973-81f3-6dae0df5a18c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-kh3sk.gb1.brightbox.com", ContainerID:"8aa559fb9506fcd55bc59075e67859fc7f8bc273ecf398356c663d22dc7233ac", Pod:"csi-node-driver-k4j64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd042bd2073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:29.976 [INFO][5284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:29.976 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" iface="eth0" netns="" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:29.976 [INFO][5284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:29.976 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.021 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.021 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.022 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.030 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.030 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" HandleID="k8s-pod-network.8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Workload="srv--kh3sk.gb1.brightbox.com-k8s-csi--node--driver--k4j64-eth0" Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.034 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:56:30.038117 containerd[1496]: 2024-12-13 05:56:30.036 [INFO][5284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc" Dec 13 05:56:30.038922 containerd[1496]: time="2024-12-13T05:56:30.038229838Z" level=info msg="TearDown network for sandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" successfully" Dec 13 05:56:30.041613 containerd[1496]: time="2024-12-13T05:56:30.041571435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:56:30.041690 containerd[1496]: time="2024-12-13T05:56:30.041638583Z" level=info msg="RemovePodSandbox \"8c7b89e54d40f3920786617a4719cb50fd13a232e5bd4a2e573ddf7f9944d8cc\" returns successfully" Dec 13 05:56:39.526231 kubelet[2663]: I1213 05:56:39.526014 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:56:41.151449 systemd[1]: Started sshd@9-10.230.15.170:22-147.75.109.163:35934.service - OpenSSH per-connection server daemon (147.75.109.163:35934). Dec 13 05:56:42.124215 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 35934 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:42.127644 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:42.147360 systemd-logind[1482]: New session 12 of user core. Dec 13 05:56:42.154763 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 05:56:43.359532 sshd[5336]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:43.365633 systemd[1]: sshd@9-10.230.15.170:22-147.75.109.163:35934.service: Deactivated successfully. Dec 13 05:56:43.369074 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 05:56:43.370224 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Dec 13 05:56:43.373931 systemd-logind[1482]: Removed session 12. Dec 13 05:56:48.524246 systemd[1]: Started sshd@10-10.230.15.170:22-147.75.109.163:42564.service - OpenSSH per-connection server daemon (147.75.109.163:42564). Dec 13 05:56:49.444082 sshd[5350]: Accepted publickey for core from 147.75.109.163 port 42564 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:49.446493 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:49.456554 systemd-logind[1482]: New session 13 of user core. Dec 13 05:56:49.463572 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 05:56:50.194676 sshd[5350]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:50.200372 systemd[1]: sshd@10-10.230.15.170:22-147.75.109.163:42564.service: Deactivated successfully. Dec 13 05:56:50.205535 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 05:56:50.207021 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Dec 13 05:56:50.208587 systemd-logind[1482]: Removed session 13. Dec 13 05:56:53.160861 systemd[1]: run-containerd-runc-k8s.io-4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15-runc.nORtfG.mount: Deactivated successfully. Dec 13 05:56:55.360486 systemd[1]: Started sshd@11-10.230.15.170:22-147.75.109.163:42574.service - OpenSSH per-connection server daemon (147.75.109.163:42574). Dec 13 05:56:56.296678 sshd[5392]: Accepted publickey for core from 147.75.109.163 port 42574 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:56.298938 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:56.306556 systemd-logind[1482]: New session 14 of user core. Dec 13 05:56:56.315401 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 05:56:57.026919 sshd[5392]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:57.031077 systemd[1]: sshd@11-10.230.15.170:22-147.75.109.163:42574.service: Deactivated successfully. Dec 13 05:56:57.033737 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 05:56:57.036024 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Dec 13 05:56:57.037612 systemd-logind[1482]: Removed session 14. Dec 13 05:56:57.186560 systemd[1]: Started sshd@12-10.230.15.170:22-147.75.109.163:50544.service - OpenSSH per-connection server daemon (147.75.109.163:50544). Dec 13 05:56:58.074450 sshd[5406]: Accepted publickey for core from 147.75.109.163 port 50544 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:58.076915 sshd[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:58.083368 systemd-logind[1482]: New session 15 of user core. Dec 13 05:56:58.090327 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 05:56:58.853471 sshd[5406]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:58.858932 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. Dec 13 05:56:58.860333 systemd[1]: sshd@12-10.230.15.170:22-147.75.109.163:50544.service: Deactivated successfully. Dec 13 05:56:58.863490 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 05:56:58.865264 systemd-logind[1482]: Removed session 15. Dec 13 05:56:59.015454 systemd[1]: Started sshd@13-10.230.15.170:22-147.75.109.163:50560.service - OpenSSH per-connection server daemon (147.75.109.163:50560). Dec 13 05:56:59.912600 sshd[5417]: Accepted publickey for core from 147.75.109.163 port 50560 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:59.918382 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:59.929241 systemd-logind[1482]: New session 16 of user core. Dec 13 05:56:59.937337 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 05:57:00.649520 sshd[5417]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:00.654847 systemd[1]: sshd@13-10.230.15.170:22-147.75.109.163:50560.service: Deactivated successfully. 
Dec 13 05:57:00.657282 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 05:57:00.658639 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Dec 13 05:57:00.660251 systemd-logind[1482]: Removed session 16. Dec 13 05:57:05.807544 systemd[1]: Started sshd@14-10.230.15.170:22-147.75.109.163:50576.service - OpenSSH per-connection server daemon (147.75.109.163:50576). Dec 13 05:57:06.755462 sshd[5461]: Accepted publickey for core from 147.75.109.163 port 50576 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:06.758879 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:06.767017 systemd-logind[1482]: New session 17 of user core. Dec 13 05:57:06.776852 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 05:57:07.481018 sshd[5461]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:07.485896 systemd[1]: sshd@14-10.230.15.170:22-147.75.109.163:50576.service: Deactivated successfully. Dec 13 05:57:07.488450 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 05:57:07.490048 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Dec 13 05:57:07.491434 systemd-logind[1482]: Removed session 17. Dec 13 05:57:12.644554 systemd[1]: Started sshd@15-10.230.15.170:22-147.75.109.163:46676.service - OpenSSH per-connection server daemon (147.75.109.163:46676). Dec 13 05:57:13.558160 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 46676 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:13.561050 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:13.568369 systemd-logind[1482]: New session 18 of user core. Dec 13 05:57:13.574355 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 05:57:14.284779 sshd[5474]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:14.290726 systemd[1]: sshd@15-10.230.15.170:22-147.75.109.163:46676.service: Deactivated successfully. Dec 13 05:57:14.294364 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 05:57:14.295607 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Dec 13 05:57:14.297139 systemd-logind[1482]: Removed session 18. Dec 13 05:57:16.399567 systemd[1]: run-containerd-runc-k8s.io-4649097342f35ad43264016f816e5ab419f7250667cdf8aec095170028271b15-runc.pxI1OK.mount: Deactivated successfully. Dec 13 05:57:19.446471 systemd[1]: Started sshd@16-10.230.15.170:22-147.75.109.163:54142.service - OpenSSH per-connection server daemon (147.75.109.163:54142). Dec 13 05:57:20.351603 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 54142 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:20.353784 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:20.360328 systemd-logind[1482]: New session 19 of user core. Dec 13 05:57:20.366316 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 05:57:21.087873 sshd[5507]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:21.092873 systemd[1]: sshd@16-10.230.15.170:22-147.75.109.163:54142.service: Deactivated successfully. Dec 13 05:57:21.096686 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 05:57:21.098721 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Dec 13 05:57:21.100397 systemd-logind[1482]: Removed session 19. 
Dec 13 05:57:21.246520 systemd[1]: Started sshd@17-10.230.15.170:22-147.75.109.163:54150.service - OpenSSH per-connection server daemon (147.75.109.163:54150). Dec 13 05:57:22.143671 sshd[5520]: Accepted publickey for core from 147.75.109.163 port 54150 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:22.146017 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:22.153632 systemd-logind[1482]: New session 20 of user core. Dec 13 05:57:22.162351 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 05:57:23.236571 sshd[5520]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:23.245537 systemd[1]: sshd@17-10.230.15.170:22-147.75.109.163:54150.service: Deactivated successfully. Dec 13 05:57:23.250504 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 05:57:23.252153 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit. Dec 13 05:57:23.253819 systemd-logind[1482]: Removed session 20. Dec 13 05:57:23.393184 systemd[1]: Started sshd@18-10.230.15.170:22-147.75.109.163:54152.service - OpenSSH per-connection server daemon (147.75.109.163:54152). Dec 13 05:57:24.304024 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 54152 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:24.307268 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:24.317057 systemd-logind[1482]: New session 21 of user core. Dec 13 05:57:24.323657 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 05:57:27.762356 sshd[5551]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:27.775605 systemd[1]: sshd@18-10.230.15.170:22-147.75.109.163:54152.service: Deactivated successfully. Dec 13 05:57:27.778996 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 05:57:27.781054 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit. Dec 13 05:57:27.782761 systemd-logind[1482]: Removed session 21. Dec 13 05:57:27.919374 systemd[1]: Started sshd@19-10.230.15.170:22-147.75.109.163:38268.service - OpenSSH per-connection server daemon (147.75.109.163:38268). Dec 13 05:57:28.836533 sshd[5570]: Accepted publickey for core from 147.75.109.163 port 38268 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:28.838700 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:28.845973 systemd-logind[1482]: New session 22 of user core. Dec 13 05:57:28.855411 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 05:57:30.091277 sshd[5570]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:30.096263 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit. Dec 13 05:57:30.096788 systemd[1]: sshd@19-10.230.15.170:22-147.75.109.163:38268.service: Deactivated successfully. Dec 13 05:57:30.099644 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 05:57:30.102172 systemd-logind[1482]: Removed session 22. Dec 13 05:57:30.250180 systemd[1]: Started sshd@20-10.230.15.170:22-147.75.109.163:38274.service - OpenSSH per-connection server daemon (147.75.109.163:38274). 
Dec 13 05:57:31.180978 sshd[5583]: Accepted publickey for core from 147.75.109.163 port 38274 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:31.183758 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:31.190813 systemd-logind[1482]: New session 23 of user core. Dec 13 05:57:31.196390 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 05:57:31.992038 sshd[5583]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:31.997590 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit. Dec 13 05:57:31.998488 systemd[1]: sshd@20-10.230.15.170:22-147.75.109.163:38274.service: Deactivated successfully. Dec 13 05:57:32.001490 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 05:57:32.003035 systemd-logind[1482]: Removed session 23. Dec 13 05:57:37.149578 systemd[1]: Started sshd@21-10.230.15.170:22-147.75.109.163:56524.service - OpenSSH per-connection server daemon (147.75.109.163:56524). Dec 13 05:57:38.054172 sshd[5630]: Accepted publickey for core from 147.75.109.163 port 56524 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:38.057332 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:38.064496 systemd-logind[1482]: New session 24 of user core. Dec 13 05:57:38.071395 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 05:57:38.820613 sshd[5630]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:38.828283 systemd[1]: sshd@21-10.230.15.170:22-147.75.109.163:56524.service: Deactivated successfully. Dec 13 05:57:38.833964 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 05:57:38.835540 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit. Dec 13 05:57:38.837632 systemd-logind[1482]: Removed session 24. Dec 13 05:57:43.981561 systemd[1]: Started sshd@22-10.230.15.170:22-147.75.109.163:56540.service - OpenSSH per-connection server daemon (147.75.109.163:56540). Dec 13 05:57:44.896046 sshd[5643]: Accepted publickey for core from 147.75.109.163 port 56540 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:44.898699 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:44.908274 systemd-logind[1482]: New session 25 of user core. Dec 13 05:57:44.916393 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 05:57:45.621610 sshd[5643]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:45.626550 systemd[1]: sshd@22-10.230.15.170:22-147.75.109.163:56540.service: Deactivated successfully. Dec 13 05:57:45.629119 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 05:57:45.630049 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit. Dec 13 05:57:45.631663 systemd-logind[1482]: Removed session 25. Dec 13 05:57:50.789579 systemd[1]: Started sshd@23-10.230.15.170:22-147.75.109.163:36724.service - OpenSSH per-connection server daemon (147.75.109.163:36724). Dec 13 05:57:51.687380 sshd[5671]: Accepted publickey for core from 147.75.109.163 port 36724 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:57:51.690857 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:57:51.698422 systemd-logind[1482]: New session 26 of user core. Dec 13 05:57:51.710372 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 05:57:52.526192 sshd[5671]: pam_unix(sshd:session): session closed for user core Dec 13 05:57:52.531694 systemd[1]: sshd@23-10.230.15.170:22-147.75.109.163:36724.service: Deactivated successfully. Dec 13 05:57:52.534663 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 05:57:52.535981 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit. Dec 13 05:57:52.537294 systemd-logind[1482]: Removed session 26.