Dec 16 09:42:09.887840 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 16 09:42:09.887863 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:42:09.887871 kernel: BIOS-provided physical RAM map:
Dec 16 09:42:09.887877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 09:42:09.887882 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 09:42:09.887887 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 09:42:09.887894 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 16 09:42:09.887899 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 16 09:42:09.887907 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 09:42:09.887912 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 09:42:09.887918 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 09:42:09.887924 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 09:42:09.887929 kernel: NX (Execute Disable) protection: active
Dec 16 09:42:09.887935 kernel: APIC: Static calls initialized
Dec 16 09:42:09.887943 kernel: SMBIOS 2.8 present.
Dec 16 09:42:09.887950 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 16 09:42:09.887956 kernel: Hypervisor detected: KVM
Dec 16 09:42:09.887961 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 09:42:09.887967 kernel: kvm-clock: using sched offset of 2965799038 cycles
Dec 16 09:42:09.887973 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 09:42:09.887980 kernel: tsc: Detected 2445.406 MHz processor
Dec 16 09:42:09.887986 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 09:42:09.887992 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 09:42:09.888001 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 09:42:09.888007 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 09:42:09.888013 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 09:42:09.888019 kernel: Using GB pages for direct mapping
Dec 16 09:42:09.888025 kernel: ACPI: Early table checksum verification disabled
Dec 16 09:42:09.888031 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 16 09:42:09.888037 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888043 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888050 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888058 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 16 09:42:09.888064 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888070 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888076 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888082 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:42:09.888088 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 16 09:42:09.888094 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 16 09:42:09.888100 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 16 09:42:09.888111 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 16 09:42:09.888118 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 16 09:42:09.888124 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 16 09:42:09.888130 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 16 09:42:09.888137 kernel: No NUMA configuration found
Dec 16 09:42:09.888143 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 16 09:42:09.888151 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 16 09:42:09.888158 kernel: Zone ranges:
Dec 16 09:42:09.888175 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 09:42:09.888181 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 16 09:42:09.888188 kernel: Normal empty
Dec 16 09:42:09.888194 kernel: Movable zone start for each node
Dec 16 09:42:09.888200 kernel: Early memory node ranges
Dec 16 09:42:09.888207 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 09:42:09.888213 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 16 09:42:09.888219 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 16 09:42:09.888228 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 09:42:09.888242 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 09:42:09.888249 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 09:42:09.888255 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 09:42:09.888261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 09:42:09.888268 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 09:42:09.888274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 09:42:09.888280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 09:42:09.888287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 09:42:09.888295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 09:42:09.888302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 09:42:09.888308 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 09:42:09.888314 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 09:42:09.888321 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 16 09:42:09.888327 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 09:42:09.888334 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 09:42:09.888340 kernel: Booting paravirtualized kernel on KVM
Dec 16 09:42:09.888346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 09:42:09.888355 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 09:42:09.888361 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 16 09:42:09.888368 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 16 09:42:09.888374 kernel: pcpu-alloc: [0] 0 1
Dec 16 09:42:09.888388 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 09:42:09.888396 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:42:09.888402 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 16 09:42:09.888409 kernel: random: crng init done
Dec 16 09:42:09.888418 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 09:42:09.888424 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 09:42:09.888431 kernel: Fallback order for Node 0: 0
Dec 16 09:42:09.888437 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 16 09:42:09.888444 kernel: Policy zone: DMA32
Dec 16 09:42:09.888450 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 09:42:09.888457 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 16 09:42:09.888463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 09:42:09.888469 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 16 09:42:09.888478 kernel: ftrace: allocated 149 pages with 4 groups
Dec 16 09:42:09.888484 kernel: Dynamic Preempt: voluntary
Dec 16 09:42:09.888490 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 09:42:09.888498 kernel: rcu: RCU event tracing is enabled.
Dec 16 09:42:09.888505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 09:42:09.888511 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 09:42:09.888518 kernel: Rude variant of Tasks RCU enabled.
Dec 16 09:42:09.888524 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 09:42:09.888530 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 09:42:09.888537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 09:42:09.888545 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 09:42:09.888552 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 09:42:09.888558 kernel: Console: colour VGA+ 80x25
Dec 16 09:42:09.888565 kernel: printk: console [tty0] enabled
Dec 16 09:42:09.888571 kernel: printk: console [ttyS0] enabled
Dec 16 09:42:09.888577 kernel: ACPI: Core revision 20230628
Dec 16 09:42:09.888584 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 09:42:09.888590 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 09:42:09.888597 kernel: x2apic enabled
Dec 16 09:42:09.888605 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 09:42:09.888611 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 09:42:09.888618 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 16 09:42:09.888624 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Dec 16 09:42:09.888631 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 09:42:09.888637 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 09:42:09.888643 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 09:42:09.888650 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 09:42:09.888665 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 09:42:09.888672 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 16 09:42:09.888679 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 16 09:42:09.888688 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 09:42:09.888694 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 09:42:09.888701 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 09:42:09.888708 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 09:42:09.888715 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 09:42:09.888722 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 09:42:09.888729 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 09:42:09.888736 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 09:42:09.888745 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 09:42:09.888751 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 09:42:09.888758 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 09:42:09.888793 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 09:42:09.888800 kernel: Freeing SMP alternatives memory: 32K
Dec 16 09:42:09.888809 kernel: pid_max: default: 32768 minimum: 301
Dec 16 09:42:09.888816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 16 09:42:09.888823 kernel: landlock: Up and running.
Dec 16 09:42:09.888829 kernel: SELinux: Initializing.
Dec 16 09:42:09.888836 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:42:09.888843 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:42:09.888850 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 09:42:09.888857 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:42:09.888863 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:42:09.888872 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:42:09.888879 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 09:42:09.888886 kernel: ... version: 0
Dec 16 09:42:09.888893 kernel: ... bit width: 48
Dec 16 09:42:09.888899 kernel: ... generic registers: 6
Dec 16 09:42:09.888906 kernel: ... value mask: 0000ffffffffffff
Dec 16 09:42:09.888913 kernel: ... max period: 00007fffffffffff
Dec 16 09:42:09.888920 kernel: ... fixed-purpose events: 0
Dec 16 09:42:09.888926 kernel: ... event mask: 000000000000003f
Dec 16 09:42:09.888935 kernel: signal: max sigframe size: 1776
Dec 16 09:42:09.888942 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 09:42:09.888949 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 09:42:09.888955 kernel: smp: Bringing up secondary CPUs ...
Dec 16 09:42:09.888962 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 09:42:09.888969 kernel: .... node #0, CPUs: #1
Dec 16 09:42:09.888976 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 09:42:09.888982 kernel: smpboot: Max logical packages: 1
Dec 16 09:42:09.888989 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Dec 16 09:42:09.888996 kernel: devtmpfs: initialized
Dec 16 09:42:09.889004 kernel: x86/mm: Memory block size: 128MB
Dec 16 09:42:09.889011 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 09:42:09.889018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 09:42:09.889025 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 09:42:09.889032 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 09:42:09.889038 kernel: audit: initializing netlink subsys (disabled)
Dec 16 09:42:09.889045 kernel: audit: type=2000 audit(1734342129.071:1): state=initialized audit_enabled=0 res=1
Dec 16 09:42:09.889052 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 09:42:09.889061 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 09:42:09.889067 kernel: cpuidle: using governor menu
Dec 16 09:42:09.889074 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 09:42:09.889081 kernel: dca service started, version 1.12.1
Dec 16 09:42:09.889088 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 16 09:42:09.889094 kernel: PCI: Using configuration type 1 for base access
Dec 16 09:42:09.889101 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
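
The BIOS-e820 map near the top of this log bounds everything that follows (last_pfn, the zone ranges, the "Memory: 1922056K/2047464K available" line). Its two usable regions, 0x0-0x9fbff and 0x100000-0x7cfdbfff, add up to roughly 2 GiB. A minimal sketch that totals the usable regions from a saved copy of this log; the file name boot.log is an assumption, not something from the log:

    import re

    # Matches entries like:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_bytes(log_text):
        """Sum the sizes of all 'usable' e820 regions in a captured kernel log."""
        total = 0
        for start, end, kind in E820_RE.findall(log_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

    with open("boot.log") as f:  # assumed path to a capture of this console log
        print("usable RAM: %.1f MiB" % (usable_bytes(f.read()) / 2**20))

For the map above this prints about 1999 MiB, consistent with the ~2 GiB of managed memory the kernel reports (the kernel's own accounting reserves a few extra pages, so the figures do not match to the kilobyte).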
Dec 16 09:42:09.889108 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 09:42:09.889115 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 09:42:09.889124 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 09:42:09.889130 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 09:42:09.889137 kernel: ACPI: Added _OSI(Module Device)
Dec 16 09:42:09.889144 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 09:42:09.889151 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 16 09:42:09.889157 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 09:42:09.889175 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 09:42:09.889182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 16 09:42:09.889188 kernel: ACPI: Interpreter enabled
Dec 16 09:42:09.889195 kernel: ACPI: PM: (supports S0 S5)
Dec 16 09:42:09.889204 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 09:42:09.889211 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 09:42:09.889218 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 09:42:09.889225 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 09:42:09.889232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 09:42:09.889397 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 09:42:09.889512 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 09:42:09.889621 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 09:42:09.889630 kernel: PCI host bridge to bus 0000:00
Dec 16 09:42:09.889737 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 09:42:09.889868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 09:42:09.889966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 09:42:09.890060 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 16 09:42:09.890176 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 09:42:09.890288 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 09:42:09.890382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 09:42:09.890501 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 16 09:42:09.890612 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 16 09:42:09.890716 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 16 09:42:09.891530 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 16 09:42:09.891660 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 16 09:42:09.891791 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 16 09:42:09.891910 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 09:42:09.892033 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.892154 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 16 09:42:09.892481 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.892605 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 16 09:42:09.892738 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.894733 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 16 09:42:09.895017 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.895149 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 16 09:42:09.895290 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.895414 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 16 09:42:09.895543 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.895657 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 16 09:42:09.895803 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.895923 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 16 09:42:09.896087 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.896220 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 16 09:42:09.896340 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:42:09.896444 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 16 09:42:09.896552 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 16 09:42:09.896655 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 09:42:09.898612 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 16 09:42:09.898741 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 16 09:42:09.898957 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 16 09:42:09.899490 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 16 09:42:09.899600 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 16 09:42:09.899717 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:42:09.900440 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 16 09:42:09.900558 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 09:42:09.900781 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 16 09:42:09.900909 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:42:09.901020 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:42:09.901129 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:42:09.901268 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 16 09:42:09.901384 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 16 09:42:09.901493 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:42:09.901605 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:42:09.901714 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:42:09.901930 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 16 09:42:09.902089 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 16 09:42:09.902225 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 16 09:42:09.902337 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:42:09.902457 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:42:09.902575 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:42:09.902706 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 16 09:42:09.902863 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 09:42:09.902970 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:42:09.903071 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:42:09.903226 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:42:09.903378 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 16 09:42:09.903543 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 16 09:42:09.903705 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:42:09.903842 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:42:09.903951 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:42:09.904072 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 16 09:42:09.904201 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 16 09:42:09.904313 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 16 09:42:09.904417 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:42:09.904526 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:42:09.904629 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:42:09.904639 kernel: acpiphp: Slot [0] registered
Dec 16 09:42:09.904752 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:42:09.904898 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 16 09:42:09.905007 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 16 09:42:09.905115 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 16 09:42:09.905235 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:42:09.905344 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:42:09.905446 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:42:09.905456 kernel: acpiphp: Slot [0-2] registered
Dec 16 09:42:09.905556 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:42:09.905688 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:42:09.907867 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:42:09.907883 kernel: acpiphp: Slot [0-3] registered
Dec 16 09:42:09.907996 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:42:09.908108 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:42:09.908224 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:42:09.908235 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 09:42:09.908242 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 09:42:09.908249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 09:42:09.908256 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 09:42:09.908263 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 09:42:09.908269 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 09:42:09.908276 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 09:42:09.908287 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 09:42:09.908294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 09:42:09.908301 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 09:42:09.908308 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 09:42:09.908315 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 09:42:09.908322 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 09:42:09.908328 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 09:42:09.908335 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 09:42:09.908342 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 09:42:09.908351 kernel: iommu: Default domain type: Translated
Dec 16 09:42:09.908358 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 09:42:09.908365 kernel: PCI: Using ACPI for IRQ routing
Dec 16 09:42:09.908372 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 09:42:09.908379 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 09:42:09.908385 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 16 09:42:09.908489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 09:42:09.908591 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 09:42:09.908692 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 09:42:09.908705 kernel: vgaarb: loaded
Dec 16 09:42:09.908712 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 09:42:09.908719 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 09:42:09.908726 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 09:42:09.908733 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 09:42:09.908740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 09:42:09.908747 kernel: pnp: PnP ACPI init
Dec 16 09:42:09.908945 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 09:42:09.908970 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 09:42:09.908978 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 09:42:09.908985 kernel: NET: Registered PF_INET protocol family
Dec 16 09:42:09.908992 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 09:42:09.908999 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 09:42:09.909006 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 09:42:09.909012 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 09:42:09.909019 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 09:42:09.909026 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 09:42:09.909035 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:42:09.909042 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:42:09.909049 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 09:42:09.909056 kernel: NET: Registered PF_XDP protocol family
Dec 16 09:42:09.909175 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 09:42:09.909282 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 09:42:09.909385 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 09:42:09.909493 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 16 09:42:09.909596 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 16 09:42:09.909718 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 16 09:42:09.909870 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:42:09.910004 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:42:09.910113 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:42:09.910235 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:42:09.910341 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:42:09.910453 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:42:09.910602 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:42:09.910748 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:42:09.913961 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:42:09.914135 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:42:09.914308 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:42:09.914476 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:42:09.914654 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:42:09.914890 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:42:09.915051 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:42:09.915210 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:42:09.915358 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:42:09.915517 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:42:09.915696 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:42:09.918074 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 16 09:42:09.918200 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:42:09.918305 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:42:09.918413 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:42:09.918514 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 16 09:42:09.918615 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:42:09.918716 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:42:09.918837 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:42:09.918940 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 16 09:42:09.919041 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:42:09.919148 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:42:09.919286 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 09:42:09.919385 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 09:42:09.919484 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 09:42:09.919578 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 16 09:42:09.919671 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 09:42:09.919780 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 09:42:09.919916 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 09:42:09.920025 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:42:09.920137 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 09:42:09.920259 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:42:09.920367 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 09:42:09.920465 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:42:09.920570 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 09:42:09.920668 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:42:09.920789 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16 09:42:09.920899 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:42:09.921005 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 16 09:42:09.921103 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:42:09.921231 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 16 09:42:09.921332 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 16 09:42:09.921441 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:42:09.921556 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 16 09:42:09.921669 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 16 09:42:09.921808 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:42:09.921931 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 16 09:42:09.922031 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 09:42:09.922140 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:42:09.922150 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 09:42:09.922158 kernel: PCI: CLS 0 bytes, default 64
Dec 16 09:42:09.922192 kernel: Initialise system trusted keyrings
Dec 16 09:42:09.922200 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 09:42:09.922207 kernel: Key type asymmetric registered
Dec 16 09:42:09.922214 kernel: Asymmetric key parser 'x509' registered
Dec 16 09:42:09.922221 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 16 09:42:09.922229 kernel: io scheduler mq-deadline registered
Dec 16 09:42:09.922236 kernel: io scheduler kyber registered
Dec 16 09:42:09.922243 kernel: io scheduler bfq registered
Dec 16 09:42:09.922361 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 16 09:42:09.922481 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 16 09:42:09.922587 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 16 09:42:09.922702 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 16 09:42:09.922876 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 16 09:42:09.922984 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 16 09:42:09.923097 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 16 09:42:09.923219 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 16 09:42:09.923335 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 16 09:42:09.923453 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 16 09:42:09.923570 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 16 09:42:09.923674 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 16 09:42:09.923804 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 16 09:42:09.923918 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 16 09:42:09.924034 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 16 09:42:09.924147 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 16 09:42:09.924174 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 09:42:09.924304 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 16 09:42:09.924424 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 16 09:42:09.924434 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 09:42:09.924442 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 16 09:42:09.924450 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 09:42:09.924458 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 09:42:09.924465 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 09:42:09.924472 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 09:42:09.924480 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 09:42:09.924604 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 09:42:09.924616 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 09:42:09.924721 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 09:42:09.924835 kernel: rtc_cmos 00:03: setting system clock to 2024-12-16T09:42:09 UTC (1734342129)
Dec 16 09:42:09.924942 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 09:42:09.924952 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 09:42:09.924960 kernel: NET: Registered PF_INET6 protocol family
Dec 16 09:42:09.924967 kernel: Segment Routing with IPv6
Dec 16 09:42:09.924978 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 09:42:09.924986 kernel: NET: Registered PF_PACKET protocol family
Dec 16 09:42:09.924993 kernel: Key type dns_resolver registered
Dec 16 09:42:09.925001 kernel: IPI shorthand broadcast: enabled
Dec 16 09:42:09.925008 kernel: sched_clock: Marking stable (1116007785, 140898544)->(1270592550, -13686221)
Dec 16 09:42:09.925023 kernel: registered taskstats version 1
Dec 16 09:42:09.925030 kernel: Loading compiled-in X.509 certificates
Dec 16 09:42:09.925037 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 16 09:42:09.925044 kernel: Key type .fscrypt registered
Dec 16 09:42:09.925054 kernel: Key type fscrypt-provisioning registered
Dec 16 09:42:09.925061 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 09:42:09.925069 kernel: ima: Allocated hash algorithm: sha1
Dec 16 09:42:09.925076 kernel: ima: No architecture policies found
Dec 16 09:42:09.925084 kernel: clk: Disabling unused clocks
Dec 16 09:42:09.925091 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 16 09:42:09.925099 kernel: Write protecting the kernel read-only data: 36864k
Dec 16 09:42:09.925106 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 16 09:42:09.925114 kernel: Run /init as init process
Dec 16 09:42:09.925123 kernel: with arguments:
Dec 16 09:42:09.925131 kernel: /init
Dec 16 09:42:09.925138 kernel: with environment:
Dec 16 09:42:09.925152 kernel: HOME=/
Dec 16 09:42:09.925159 kernel: TERM=linux
Dec 16 09:42:09.925175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 16 09:42:09.925185 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 16 09:42:09.925194 systemd[1]: Detected virtualization kvm.
Dec 16 09:42:09.925205 systemd[1]: Detected architecture x86-64.
Dec 16 09:42:09.925212 systemd[1]: Running in initrd.
Dec 16 09:42:09.925220 systemd[1]: No hostname configured, using default hostname.
Dec 16 09:42:09.925227 systemd[1]: Hostname set to .
Dec 16 09:42:09.925235 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 09:42:09.925243 systemd[1]: Queued start job for default target initrd.target.
Dec 16 09:42:09.925250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:42:09.925258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:42:09.925268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 09:42:09.925276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 09:42:09.925284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 09:42:09.925299 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 09:42:09.925308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 09:42:09.925315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 09:42:09.925323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:42:09.925333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:42:09.925341 systemd[1]: Reached target paths.target - Path Units.
Dec 16 09:42:09.925349 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 09:42:09.925357 systemd[1]: Reached target swap.target - Swaps.
Dec 16 09:42:09.925364 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 09:42:09.925372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 09:42:09.925380 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 09:42:09.925388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 09:42:09.925398 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 16 09:42:09.925406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:42:09.925413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:42:09.925427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:42:09.925435 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 09:42:09.925443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 09:42:09.925451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 09:42:09.925458 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 09:42:09.925466 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 09:42:09.925476 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 09:42:09.925483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 09:42:09.925491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:09.925518 systemd-journald[187]: Collecting audit messages is disabled.
Dec 16 09:42:09.925539 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 09:42:09.925547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:42:09.925555 systemd[1]: Finished systemd-fsck-usr.service.
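The \x2d sequences in the device unit names above are systemd's path escaping: "/" separators in a path become "-", so a literal "-" inside a component has to be written as \x2d (hence /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device). A rough sketch of that mapping, an illustration rather than systemd's actual implementation (the canonical tool is systemd-escape --path):

    def escape_path(path):
        """Approximate systemd path escaping as used for .device unit names."""
        out = []
        for i, part in enumerate(path.strip("/").split("/")):
            if i:
                out.append("-")  # '/' separators become '-'
            for ch in part:
                if ch.isalnum() or ch in "_.:":
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name in the log
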
Dec 16 09:42:09.925569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 09:42:09.925580 systemd-journald[187]: Journal started
Dec 16 09:42:09.925597 systemd-journald[187]: Runtime Journal (/run/log/journal/3db6e307a8944ed6aad7be5d77244063) is 4.8M, max 38.4M, 33.6M free.
Dec 16 09:42:09.897090 systemd-modules-load[188]: Inserted module 'overlay'
Dec 16 09:42:09.967874 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 09:42:09.967900 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 09:42:09.967913 kernel: Bridge firewalling registered
Dec 16 09:42:09.935644 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 16 09:42:09.969305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:42:09.969979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:09.971052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 09:42:09.977939 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:42:09.979914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:42:09.982897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 09:42:09.983958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 09:42:09.998541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:42:10.004113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:42:10.009885 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 09:42:10.010565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:42:10.011714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:42:10.017672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 09:42:10.025092 dracut-cmdline[219]: dracut-dracut-053
Dec 16 09:42:10.029984 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:42:10.059027 systemd-resolved[226]: Positive Trust Anchors:
Dec 16 09:42:10.059042 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 09:42:10.059072 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 09:42:10.067179 systemd-resolved[226]: Defaulting to hostname 'linux'.
Dec 16 09:42:10.068418 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 09:42:10.069236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:42:10.102786 kernel: SCSI subsystem initialized
Dec 16 09:42:10.111793 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 09:42:10.120803 kernel: iscsi: registered transport (tcp)
Dec 16 09:42:10.139840 kernel: iscsi: registered transport (qla4xxx)
Dec 16 09:42:10.139876 kernel: QLogic iSCSI HBA Driver
Dec 16 09:42:10.184648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 09:42:10.189926 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 09:42:10.218186 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 09:42:10.218232 kernel: device-mapper: uevent: version 1.0.3
Dec 16 09:42:10.218790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 16 09:42:10.260796 kernel: raid6: avx2x4 gen() 34513 MB/s
Dec 16 09:42:10.277791 kernel: raid6: avx2x2 gen() 36669 MB/s
Dec 16 09:42:10.294900 kernel: raid6: avx2x1 gen() 24268 MB/s
Dec 16 09:42:10.294937 kernel: raid6: using algorithm avx2x2 gen() 36669 MB/s
Dec 16 09:42:10.313006 kernel: raid6: .... xor() 30843 MB/s, rmw enabled
Dec 16 09:42:10.313056 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 09:42:10.331796 kernel: xor: automatically using best checksumming function avx
Dec 16 09:42:10.461815 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 09:42:10.475335 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 09:42:10.481933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:42:10.498077 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Dec 16 09:42:10.502896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:42:10.508914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 09:42:10.527222 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Dec 16 09:42:10.562184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 09:42:10.567930 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 09:42:10.637643 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:42:10.649828 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 09:42:10.674101 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 09:42:10.675919 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
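The dracut-cmdline entry above repeats the kernel parameters that steer the rest of the initrd: root=LABEL=ROOT picks the root filesystem, mount.usr/verity.usr/verity.usrhash describe the dm-verity-protected /usr, and flatcar.oem.id=hetzner selects the OEM profile Ignition uses later. A minimal sketch for pulling such a line apart (naive whitespace splitting; quoted values containing spaces would need more care):

    def parse_cmdline(cmdline):
        """Split kernel parameters into key -> list of values; bare words map to None."""
        params = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")
            params.setdefault(key, []).append(val if sep else None)
        return params

    line = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
            "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
            "flatcar.first_boot=detected flatcar.oem.id=hetzner")
    p = parse_cmdline(line)
    print(p["root"])     # ['LABEL=ROOT']
    print(p["console"])  # ['ttyS0,115200n8', 'tty0'] - both consoles are enabled here

Note that the real command line in this log carries rootflags=rw and mount.usrflags=ro twice; duplicates like these are harmless, since the values agree and the kernel simply parses them again.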
Dec 16 09:42:10.677677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:42:10.678954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 09:42:10.687854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 09:42:10.699844 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 09:42:10.727844 kernel: scsi host0: Virtio SCSI HBA
Dec 16 09:42:10.736357 kernel: ACPI: bus type USB registered
Dec 16 09:42:10.736393 kernel: usbcore: registered new interface driver usbfs
Dec 16 09:42:10.737434 kernel: usbcore: registered new interface driver hub
Dec 16 09:42:10.738419 kernel: usbcore: registered new device driver usb
Dec 16 09:42:10.742083 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 09:42:10.778621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 09:42:10.778736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:42:10.781976 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:42:10.784533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:42:10.784649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:10.785970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:10.792575 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 09:42:10.796824 kernel: libata version 3.00 loaded.
Dec 16 09:42:10.799391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:10.814804 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 16 09:42:10.816788 kernel: AES CTR mode by8 optimization enabled
Dec 16 09:42:10.834818 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 09:42:10.843168 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 09:42:10.843194 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 16 09:42:10.843333 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 09:42:10.843454 kernel: scsi host1: ahci
Dec 16 09:42:10.843589 kernel: scsi host2: ahci
Dec 16 09:42:10.843730 kernel: scsi host3: ahci
Dec 16 09:42:10.846948 kernel: scsi host4: ahci
Dec 16 09:42:10.847079 kernel: scsi host5: ahci
Dec 16 09:42:10.847215 kernel: scsi host6: ahci
Dec 16 09:42:10.847350 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Dec 16 09:42:10.847361 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Dec 16 09:42:10.847370 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Dec 16 09:42:10.847383 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Dec 16 09:42:10.847393 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Dec 16 09:42:10.847401 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Dec 16 09:42:10.896220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:10.900909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:42:10.914738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
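All six ata ports above share the same AHCI MMIO region (abar m4096@0xfea1a000, the BAR assigned to 0000:00:1f.2 as reg 0x24 during PCI enumeration earlier in this log) and the same IRQ, differing only by a fixed offset. That spacing follows the AHCI register layout, where port register blocks start at ABAR+0x100 and each port occupies 0x80 bytes; a quick arithmetic check against the logged addresses:

    ABAR = 0xFEA1A000  # AHCI MMIO base from the log (pci 0000:00:1f.2 reg 0x24)

    def ahci_port_base(abar, port):
        """Port register block address per the AHCI spec: ABAR + 0x100 + port * 0x80."""
        return abar + 0x100 + port * 0x80

    for port in range(6):  # the controller reports 6 ports (impl mask 0x3f)
        print("ata%d: port 0x%x" % (port + 1, ahci_port_base(ABAR, port)))
    # 0xfea1a100, 0xfea1a180, 0xfea1a200, ..., 0xfea1a380 - matching the log lines above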
Dec 16 09:42:11.158489 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 16 09:42:11.158572 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 09:42:11.158592 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 09:42:11.158790 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 09:42:11.161821 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 09:42:11.165923 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 16 09:42:11.165961 kernel: ata1.00: applying bridge limits
Dec 16 09:42:11.168827 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 09:42:11.171908 kernel: ata1.00: configured for UDMA/100
Dec 16 09:42:11.171968 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 16 09:42:11.203823 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 09:42:11.232805 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 16 09:42:11.232970 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 16 09:42:11.233098 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 09:42:11.233236 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 16 09:42:11.233365 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 16 09:42:11.250191 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 16 09:42:11.250367 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 16 09:42:11.250515 kernel: hub 1-0:1.0: USB hub found
Dec 16 09:42:11.250695 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 09:42:11.251683 kernel: hub 1-0:1.0: 4 ports detected
Dec 16 09:42:11.251852 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 09:42:11.251999 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 16 09:42:11.252207 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 09:42:11.252351 kernel: hub 2-0:1.0: USB hub found
Dec 16 09:42:11.252492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 09:42:11.252503 kernel: hub 2-0:1.0: 4 ports detected
Dec 16 09:42:11.252630 kernel: GPT:17805311 != 80003071
Dec 16 09:42:11.252644 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 09:42:11.252653 kernel: GPT:17805311 != 80003071
Dec 16 09:42:11.252661 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 09:42:11.252670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 09:42:11.252679 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 09:42:11.260517 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 16 09:42:11.267628 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 09:42:11.267642 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 16 09:42:11.294153 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (451)
Dec 16 09:42:11.297780 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (455)
Dec 16 09:42:11.310746 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 16 09:42:11.319879 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 16 09:42:11.323617 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 16 09:42:11.324406 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 16 09:42:11.329025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 09:42:11.334904 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 09:42:11.340504 disk-uuid[573]: Primary Header is updated. Dec 16 09:42:11.340504 disk-uuid[573]: Secondary Entries is updated. Dec 16 09:42:11.340504 disk-uuid[573]: Secondary Header is updated. Dec 16 09:42:11.345808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:42:11.351796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:42:11.463797 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 09:42:11.600804 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 09:42:11.605103 kernel: usbcore: registered new interface driver usbhid Dec 16 09:42:11.605161 kernel: usbhid: USB HID core driver Dec 16 09:42:11.610157 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 16 09:42:11.610231 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 09:42:12.357914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:42:12.357982 disk-uuid[575]: The operation has completed successfully. Dec 16 09:42:12.434464 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 09:42:12.434619 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 09:42:12.446930 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 09:42:12.451127 sh[592]: Success Dec 16 09:42:12.463897 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 16 09:42:12.510072 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 09:42:12.516654 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 09:42:12.520187 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 09:42:12.535574 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 16 09:42:12.535613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:42:12.535624 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 16 09:42:12.538538 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 09:42:12.538556 kernel: BTRFS info (device dm-0): using free space tree Dec 16 09:42:12.547782 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 09:42:12.549047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 09:42:12.550053 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 09:42:12.555908 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 09:42:12.557903 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
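verity-setup.service builds the read-only /usr device from the verity.usrhash= root hash on the kernel command line, and device-mapper then checks every block read against a sha256 hash tree (via the sha256-ni implementation reported above). A hand-driven equivalent with the standalone veritysetup tool looks roughly as follows; this is a sketch only, since Flatcar carries the hash tree inside the USR-A partition itself, so the separate data and hash devices shown here are an assumption:

    ROOT_HASH=<value of verity.usrhash= from the kernel command line>
    # Map a verity device named "usr"; any block that disagrees with the
    # hash tree rooted at $ROOT_HASH makes the read fail.
    veritysetup open /dev/sdXN usr /dev/sdXM "$ROOT_HASH"
    mount -o ro /dev/mapper/usr /sysusr/usr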
Dec 16 09:42:12.569643 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:42:12.569674 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:42:12.569685 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:42:12.575160 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:42:12.575187 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:42:12.584895 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 16 09:42:12.588004 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:42:12.591133 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 09:42:12.598965 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 09:42:12.667037 ignition[686]: Ignition 2.19.0 Dec 16 09:42:12.667049 ignition[686]: Stage: fetch-offline Dec 16 09:42:12.669070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 09:42:12.667084 ignition[686]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:12.667094 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:12.667185 ignition[686]: parsed url from cmdline: "" Dec 16 09:42:12.667191 ignition[686]: no config URL provided Dec 16 09:42:12.667208 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:42:12.667217 ignition[686]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:42:12.667222 ignition[686]: failed to fetch config: resource requires networking Dec 16 09:42:12.667410 ignition[686]: Ignition finished successfully Dec 16 09:42:12.687277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 09:42:12.696925 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 09:42:12.716489 systemd-networkd[778]: lo: Link UP Dec 16 09:42:12.716500 systemd-networkd[778]: lo: Gained carrier Dec 16 09:42:12.719121 systemd-networkd[778]: Enumeration completed Dec 16 09:42:12.719339 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 09:42:12.720305 systemd[1]: Reached target network.target - Network. Dec 16 09:42:12.720397 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:42:12.720400 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:42:12.722190 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:42:12.722194 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:42:12.723711 systemd-networkd[778]: eth0: Link UP Dec 16 09:42:12.723715 systemd-networkd[778]: eth0: Gained carrier Dec 16 09:42:12.723722 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:42:12.727434 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 09:42:12.727661 systemd-networkd[778]: eth1: Link UP Dec 16 09:42:12.727665 systemd-networkd[778]: eth1: Gained carrier Dec 16 09:42:12.727679 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
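Both NICs are matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped in the initrd; the "potentially unpredictable interface name" note is networkd warning that the match is by wildcard rather than by a stable name. The file's exact contents are not shown in the log, but a unit with the behavior seen here is essentially (assumed sketch):

    # /usr/lib/systemd/network/zz-default.network (assumed contents)
    [Match]
    Name=*          # matches eth0 and eth1, hence the warning above

    [Network]
    DHCP=yes        # both interfaces acquire DHCPv4 leases below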
Dec 16 09:42:12.742717 ignition[780]: Ignition 2.19.0 Dec 16 09:42:12.742734 ignition[780]: Stage: fetch Dec 16 09:42:12.742899 ignition[780]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:12.742911 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:12.742992 ignition[780]: parsed url from cmdline: "" Dec 16 09:42:12.742996 ignition[780]: no config URL provided Dec 16 09:42:12.743001 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:42:12.743009 ignition[780]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:42:12.743025 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 16 09:42:12.743160 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 16 09:42:12.754818 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 09:42:12.785809 systemd-networkd[778]: eth0: DHCPv4 address 157.90.156.134/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 09:42:12.943542 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 16 09:42:12.948353 ignition[780]: GET result: OK Dec 16 09:42:12.948989 ignition[780]: parsing config with SHA512: 134b8d34999ce2eda6fcb415f6a4a8a557fe0324babc3ca691c6121364d1fc0834aec1f498f3c730222c6da63ba4e80d40d3ee71aeca065b442f3519a340dc1e Dec 16 09:42:12.952779 unknown[780]: fetched base config from "system" Dec 16 09:42:12.953290 ignition[780]: fetch: fetch complete Dec 16 09:42:12.952801 unknown[780]: fetched base config from "system" Dec 16 09:42:12.953296 ignition[780]: fetch: fetch passed Dec 16 09:42:12.952810 unknown[780]: fetched user config from "hetzner" Dec 16 09:42:12.953344 ignition[780]: Ignition finished successfully Dec 16 09:42:12.959141 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 09:42:12.966970 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 09:42:12.983069 ignition[788]: Ignition 2.19.0 Dec 16 09:42:12.983081 ignition[788]: Stage: kargs Dec 16 09:42:12.983267 ignition[788]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:12.985925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 09:42:12.983280 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:12.984039 ignition[788]: kargs: kargs passed Dec 16 09:42:12.995952 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 09:42:12.984089 ignition[788]: Ignition finished successfully Dec 16 09:42:13.011593 ignition[795]: Ignition 2.19.0 Dec 16 09:42:13.011608 ignition[795]: Stage: disks Dec 16 09:42:13.011801 ignition[795]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:13.014087 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 09:42:13.011814 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:13.016088 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 09:42:13.012528 ignition[795]: disks: disks passed Dec 16 09:42:13.017036 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 09:42:13.012570 ignition[795]: Ignition finished successfully Dec 16 09:42:13.018909 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 09:42:13.020616 systemd[1]: Reached target sysinit.target - System Initialization. 
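The fetch stage races DHCP: attempt #1 fails with "network is unreachable" because no route exists yet, and the retry succeeds once eth0 and eth1 hold their leases. The same fetch can be reproduced by hand from the instance; the retry loop below mirrors what Ignition's attempts do (illustrative shell, URL taken from the log):

    # Poll the Hetzner metadata service until the link-local route is up,
    # then print the same user data Ignition parsed above.
    until curl -sf http://169.254.169.254/hetzner/v1/userdata; do
        sleep 1
    done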
Dec 16 09:42:13.021966 systemd[1]: Reached target basic.target - Basic System. Dec 16 09:42:13.035900 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 09:42:13.058311 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 16 09:42:13.061431 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 09:42:13.067980 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 09:42:13.168799 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 16 09:42:13.169342 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 09:42:13.170590 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 09:42:13.187920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 09:42:13.190955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 09:42:13.194061 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 09:42:13.195735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 09:42:13.195913 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 09:42:13.207798 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) Dec 16 09:42:13.207854 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:42:13.206316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 09:42:13.219278 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:42:13.219302 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:42:13.219319 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:42:13.219329 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:42:13.216169 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 09:42:13.226050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 09:42:13.268177 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 09:42:13.270805 coreos-metadata[814]: Dec 16 09:42:13.269 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 16 09:42:13.271716 coreos-metadata[814]: Dec 16 09:42:13.270 INFO Fetch successful Dec 16 09:42:13.271716 coreos-metadata[814]: Dec 16 09:42:13.270 INFO wrote hostname ci-4081-2-1-e-12e77f9037 to /sysroot/etc/hostname Dec 16 09:42:13.271953 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 09:42:13.275996 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 16 09:42:13.280176 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 09:42:13.284519 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 09:42:13.370908 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 09:42:13.374850 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 09:42:13.376975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
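The hostname agent does exactly what its log lines describe: one GET against the metadata service, with the result written into the new root. A minimal shell equivalent (a sketch of the same effect, not the agent's actual implementation):

    HOSTNAME=$(curl -sf http://169.254.169.254/hetzner/v1/metadata/hostname)
    echo "$HOSTNAME" > /sysroot/etc/hostname    # here: ci-4081-2-1-e-12e77f9037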
Dec 16 09:42:13.384784 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:42:13.411718 ignition[928]: INFO : Ignition 2.19.0 Dec 16 09:42:13.411718 ignition[928]: INFO : Stage: mount Dec 16 09:42:13.412934 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:13.412934 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:13.412934 ignition[928]: INFO : mount: mount passed Dec 16 09:42:13.412934 ignition[928]: INFO : Ignition finished successfully Dec 16 09:42:13.414740 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 09:42:13.415938 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 09:42:13.422877 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 09:42:13.534266 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 09:42:13.539079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 09:42:13.552823 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Dec 16 09:42:13.556029 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:42:13.556069 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:42:13.558697 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:42:13.564368 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:42:13.564419 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:42:13.568437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 09:42:13.610303 ignition[957]: INFO : Ignition 2.19.0 Dec 16 09:42:13.610303 ignition[957]: INFO : Stage: files Dec 16 09:42:13.611878 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:13.611878 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:13.611878 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Dec 16 09:42:13.614543 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 09:42:13.614543 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 09:42:13.616399 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 09:42:13.617285 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 09:42:13.618182 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 09:42:13.617817 unknown[957]: wrote ssh authorized keys file for user: core Dec 16 09:42:13.620940 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 16 09:42:13.622036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 16 09:42:13.721112 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 09:42:13.910224 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 16 09:42:13.910224 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 09:42:13.912600 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 16 09:42:13.912600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 16 09:42:14.033937 systemd-networkd[778]: eth0: Gained IPv6LL Dec 16 09:42:14.098016 systemd-networkd[778]: eth1: Gained IPv6LL Dec 16 09:42:14.313720 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 09:42:14.564418 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 16 09:42:14.564418 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(d): op(e): [finished] writing 
systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 09:42:14.566305 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 09:42:14.574904 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 09:42:14.574904 ignition[957]: INFO : files: files passed Dec 16 09:42:14.574904 ignition[957]: INFO : Ignition finished successfully Dec 16 09:42:14.568784 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 09:42:14.575942 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 09:42:14.579907 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 09:42:14.580801 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 09:42:14.580905 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 09:42:14.592702 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:42:14.593583 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:42:14.594338 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:42:14.595356 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 09:42:14.596276 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 09:42:14.607013 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 09:42:14.650819 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 09:42:14.650968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 09:42:14.652151 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 09:42:14.653384 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 09:42:14.654584 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 09:42:14.659875 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 09:42:14.671612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 09:42:14.676896 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 09:42:14.687676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 09:42:14.688931 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:42:14.689594 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 09:42:14.690660 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 09:42:14.690833 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 16 09:42:14.691870 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 09:42:14.692576 systemd[1]: Stopped target basic.target - Basic System. Dec 16 09:42:14.693807 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 09:42:14.694728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 09:42:14.695688 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 09:42:14.696795 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 09:42:14.697866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 09:42:14.699106 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 09:42:14.701369 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 09:42:14.702271 systemd[1]: Stopped target swap.target - Swaps. Dec 16 09:42:14.703119 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 09:42:14.703307 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 09:42:14.704454 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 09:42:14.705303 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:42:14.706316 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 09:42:14.707929 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:42:14.708886 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 09:42:14.709049 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 09:42:14.710604 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 09:42:14.710758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 09:42:14.712092 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 09:42:14.712268 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 09:42:14.713386 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 09:42:14.713577 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 09:42:14.719983 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 09:42:14.722932 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 09:42:14.723843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 09:42:14.723958 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:42:14.725947 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 09:42:14.726059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 09:42:14.732893 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 09:42:14.732998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 16 09:42:14.735233 ignition[1010]: INFO : Ignition 2.19.0 Dec 16 09:42:14.735233 ignition[1010]: INFO : Stage: umount Dec 16 09:42:14.735233 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:42:14.735233 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:42:14.745180 ignition[1010]: INFO : umount: umount passed Dec 16 09:42:14.745180 ignition[1010]: INFO : Ignition finished successfully Dec 16 09:42:14.737583 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 09:42:14.737706 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 09:42:14.738665 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 09:42:14.738743 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 09:42:14.739267 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 09:42:14.739313 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 09:42:14.744978 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 09:42:14.745048 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 09:42:14.746139 systemd[1]: Stopped target network.target - Network. Dec 16 09:42:14.749705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 09:42:14.749804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 09:42:14.750526 systemd[1]: Stopped target paths.target - Path Units. Dec 16 09:42:14.751893 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 09:42:14.755932 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 09:42:14.756602 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 09:42:14.757490 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 09:42:14.758487 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 09:42:14.758539 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 09:42:14.759034 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 09:42:14.759084 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 09:42:14.759545 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 09:42:14.759606 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 09:42:14.761881 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 09:42:14.761927 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 09:42:14.762557 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 09:42:14.763419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 09:42:14.765739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 09:42:14.766490 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 09:42:14.766608 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 09:42:14.766814 systemd-networkd[778]: eth0: DHCPv6 lease lost Dec 16 09:42:14.768207 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 09:42:14.768335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 09:42:14.770878 systemd-networkd[778]: eth1: DHCPv6 lease lost Dec 16 09:42:14.772193 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 09:42:14.772353 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 09:42:14.774505 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 09:42:14.774571 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:42:14.775894 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 09:42:14.775942 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 09:42:14.783896 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 09:42:14.784511 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 09:42:14.784590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 09:42:14.785824 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 09:42:14.785883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:42:14.787026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 09:42:14.787139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 09:42:14.788679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 09:42:14.788877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 09:42:14.790519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:42:14.803054 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 09:42:14.803197 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 09:42:14.805067 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 09:42:14.805269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 09:42:14.806914 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 09:42:14.806982 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 09:42:14.807604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 09:42:14.807656 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 09:42:14.808801 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 09:42:14.808874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 09:42:14.810456 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 09:42:14.810504 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 09:42:14.811641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 09:42:14.811691 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:42:14.817949 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 09:42:14.819117 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 09:42:14.819841 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 09:42:14.821091 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 09:42:14.821145 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:42:14.822336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 09:42:14.822385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:42:14.823860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 09:42:14.823912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:42:14.828882 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 09:42:14.829016 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 09:42:14.830602 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 09:42:14.835965 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 09:42:14.844635 systemd[1]: Switching root. Dec 16 09:42:14.875137 systemd-journald[187]: Journal stopped Dec 16 09:42:15.846474 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Dec 16 09:42:15.846545 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 09:42:15.846557 kernel: SELinux: policy capability open_perms=1 Dec 16 09:42:15.846567 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 09:42:15.846580 kernel: SELinux: policy capability always_check_network=0 Dec 16 09:42:15.846589 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 09:42:15.846599 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 09:42:15.846608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 09:42:15.846618 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 09:42:15.846627 kernel: audit: type=1403 audit(1734342135.023:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 09:42:15.846638 systemd[1]: Successfully loaded SELinux policy in 42.886ms. Dec 16 09:42:15.846665 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.252ms. Dec 16 09:42:15.846677 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 16 09:42:15.846694 systemd[1]: Detected virtualization kvm. Dec 16 09:42:15.846709 systemd[1]: Detected architecture x86-64. Dec 16 09:42:15.846719 systemd[1]: Detected first boot. Dec 16 09:42:15.846729 systemd[1]: Hostname set to <ci-4081-2-1-e-12e77f9037>. Dec 16 09:42:15.846739 systemd[1]: Initializing machine ID from VM UUID. Dec 16 09:42:15.846754 zram_generator::config[1053]: No configuration found. Dec 16 09:42:15.847310 systemd[1]: Populated /etc with preset unit settings. Dec 16 09:42:15.847325 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 09:42:15.847339 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 09:42:15.847349 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 09:42:15.847360 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 09:42:15.847371 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 09:42:15.847381 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 09:42:15.847391 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 09:42:15.847401 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 09:42:15.847412 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 09:42:15.847424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 09:42:15.847434 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 09:42:15.847444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:42:15.847456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 09:42:15.847466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 09:42:15.847476 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 09:42:15.847486 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 09:42:15.847497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 09:42:15.847507 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 09:42:15.847519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:42:15.847530 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 09:42:15.847540 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 09:42:15.847561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 09:42:15.847571 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 09:42:15.847581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:42:15.847593 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 09:42:15.847603 systemd[1]: Reached target slices.target - Slice Units. Dec 16 09:42:15.847614 systemd[1]: Reached target swap.target - Swaps. Dec 16 09:42:15.847624 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 09:42:15.847638 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 09:42:15.847651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:42:15.847661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 09:42:15.847672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 09:42:15.847682 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 09:42:15.847693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 09:42:15.847706 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 09:42:15.847716 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 09:42:15.847732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:15.847745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 09:42:15.847755 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 09:42:15.847783 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 09:42:15.847794 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 09:42:15.847805 systemd[1]: Reached target machines.target - Containers. Dec 16 09:42:15.847815 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 09:42:15.847826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 09:42:15.847836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 09:42:15.847846 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 09:42:15.847856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:42:15.847866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 09:42:15.847879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:42:15.847889 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 09:42:15.847899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:42:15.847909 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 09:42:15.847919 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 09:42:15.847930 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 09:42:15.847941 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 09:42:15.847951 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 09:42:15.847963 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 09:42:15.847974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 09:42:15.847984 kernel: fuse: init (API version 7.39) Dec 16 09:42:15.847994 kernel: loop: module loaded Dec 16 09:42:15.848004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 09:42:15.848014 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 09:42:15.848024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 09:42:15.848034 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 09:42:15.848062 systemd-journald[1133]: Collecting audit messages is disabled. Dec 16 09:42:15.848087 systemd[1]: Stopped verity-setup.service. Dec 16 09:42:15.848102 systemd-journald[1133]: Journal started Dec 16 09:42:15.848121 systemd-journald[1133]: Runtime Journal (/run/log/journal/3db6e307a8944ed6aad7be5d77244063) is 4.8M, max 38.4M, 33.6M free. Dec 16 09:42:15.570142 systemd[1]: Queued start job for default target multi-user.target. Dec 16 09:42:15.588951 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 09:42:15.589577 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 09:42:15.863692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:15.863743 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 09:42:15.868093 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 09:42:15.869716 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 09:42:15.870532 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 09:42:15.872055 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 09:42:15.872802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 09:42:15.873676 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 09:42:15.877472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
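The modprobe@*.service entries are all instances of one template unit: systemd substitutes the instance name (the part after "@") for %i, so a single unit file covers configfs, dm_mod, drm, efi_pstore, fuse, and loop. The mechanism can be inspected and driven directly (real systemctl commands; the unit body shown is a simplified sketch of the shipped template, not a verbatim copy):

    systemctl cat modprobe@.service
    # [Service]
    # Type=oneshot
    # ExecStart=modprobe -abq %i     <- %i expands to "loop", "fuse", ...

    systemctl start modprobe@loop.service   # loads the loop module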
Dec 16 09:42:15.878491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:42:15.879443 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 09:42:15.879614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 09:42:15.880613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 09:42:15.880802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 09:42:15.882002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 09:42:15.882256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 09:42:15.883612 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 09:42:15.884999 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 09:42:15.885958 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 09:42:15.886158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 09:42:15.887303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 09:42:15.889441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 09:42:15.890325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 09:42:15.899806 kernel: ACPI: bus type drm_connector registered Dec 16 09:42:15.900570 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 09:42:15.901341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 09:42:15.911299 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 09:42:15.918555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 09:42:15.923702 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 09:42:15.925025 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 09:42:15.925111 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 09:42:15.927597 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 16 09:42:15.934252 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 09:42:15.938870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 09:42:15.939445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:42:15.945910 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 09:42:15.950776 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 09:42:15.951571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 09:42:15.954741 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 09:42:15.955285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 09:42:15.956953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 09:42:15.959871 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Dec 16 09:42:15.961369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 09:42:15.964272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 09:42:15.964887 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 09:42:15.965595 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 09:42:16.009552 systemd-journald[1133]: Time spent on flushing to /var/log/journal/3db6e307a8944ed6aad7be5d77244063 is 23.395ms for 1135 entries. Dec 16 09:42:16.009552 systemd-journald[1133]: System Journal (/var/log/journal/3db6e307a8944ed6aad7be5d77244063) is 8.0M, max 584.8M, 576.8M free. Dec 16 09:42:16.054601 systemd-journald[1133]: Received client request to flush runtime journal. Dec 16 09:42:16.058372 kernel: loop0: detected capacity change from 0 to 140768 Dec 16 09:42:16.017832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 09:42:16.020459 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 09:42:16.031028 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 16 09:42:16.034197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:42:16.065585 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 09:42:16.062683 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 09:42:16.077579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:42:16.085966 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 16 09:42:16.087330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 09:42:16.087951 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 16 09:42:16.094148 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 16 09:42:16.099508 kernel: loop1: detected capacity change from 0 to 210664 Dec 16 09:42:16.094165 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 16 09:42:16.098478 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 16 09:42:16.109955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:42:16.117926 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 09:42:16.140806 kernel: loop2: detected capacity change from 0 to 8 Dec 16 09:42:16.155128 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 09:42:16.164996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 09:42:16.168430 kernel: loop3: detected capacity change from 0 to 142488 Dec 16 09:42:16.188397 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 16 09:42:16.188710 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 16 09:42:16.194287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
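systemd-journal-flush.service is what moves the early-boot runtime journal (held in /run while persistent storage was not yet writable) into the persistent journal under /var/log/journal, which is why both a Runtime Journal and a System Journal with separate size budgets are reported. The same operations are available by hand (real journalctl flags):

    journalctl --flush        # flush the /run journal to /var/log/journal, as done here
    journalctl --disk-usage   # report usage against the limits shown above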
Dec 16 09:42:16.228859 kernel: loop4: detected capacity change from 0 to 140768 Dec 16 09:42:16.253804 kernel: loop5: detected capacity change from 0 to 210664 Dec 16 09:42:16.276799 kernel: loop6: detected capacity change from 0 to 8 Dec 16 09:42:16.280810 kernel: loop7: detected capacity change from 0 to 142488 Dec 16 09:42:16.305195 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 16 09:42:16.306604 (sd-merge)[1201]: Merged extensions into '/usr'. Dec 16 09:42:16.313380 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 09:42:16.313407 systemd[1]: Reloading... Dec 16 09:42:16.414818 zram_generator::config[1236]: No configuration found. Dec 16 09:42:16.518112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:42:16.534591 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 09:42:16.559693 systemd[1]: Reloading finished in 245 ms. Dec 16 09:42:16.584896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 09:42:16.588102 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 09:42:16.597952 systemd[1]: Starting ensure-sysext.service... Dec 16 09:42:16.600320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 09:42:16.619896 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Dec 16 09:42:16.619910 systemd[1]: Reloading... Dec 16 09:42:16.621378 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 09:42:16.621741 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 09:42:16.622663 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 09:42:16.622938 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Dec 16 09:42:16.623016 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Dec 16 09:42:16.628324 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 09:42:16.628340 systemd-tmpfiles[1271]: Skipping /boot Dec 16 09:42:16.647728 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 09:42:16.647958 systemd-tmpfiles[1271]: Skipping /boot Dec 16 09:42:16.704829 zram_generator::config[1301]: No configuration found. Dec 16 09:42:16.799231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:42:16.839929 systemd[1]: Reloading finished in 219 ms. Dec 16 09:42:16.857083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 09:42:16.861134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 09:42:16.871069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 16 09:42:16.875932 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
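The (sd-merge) lines are systemd-sysext overlaying the four extension images onto /usr, after which systemd reloads so the units those extensions ship (containerd, docker, kubernetes) become visible; ensure-sysext.service then triggers a second reload against the merged tree. The merge state can be inspected or redone with the same tool (real commands):

    systemd-sysext status     # list merged extensions and the hierarchies they cover
    systemd-sysext refresh    # unmerge and re-merge, as happens during this boot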
Dec 16 09:42:16.885029 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 09:42:16.890935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 09:42:16.899132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:42:16.902116 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 09:42:16.915717 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 09:42:16.917397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.917546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:42:16.920509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:42:16.923953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:42:16.927944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:42:16.928577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:42:16.928662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.929475 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 09:42:16.934503 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 09:42:16.938980 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.939136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:42:16.939284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:42:16.939486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.942396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.942570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:42:16.950002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 09:42:16.950916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:42:16.951025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:42:16.952736 systemd[1]: Finished ensure-sysext.service. Dec 16 09:42:16.962935 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 09:42:16.973378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 09:42:16.978948 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 09:42:17.002657 systemd-udevd[1354]: Using default interface naming scheme 'v255'. 
Dec 16 09:42:17.003872 augenrules[1377]: No rules
Dec 16 09:42:17.014834 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 16 09:42:17.015645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 09:42:17.017112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:42:17.017286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:42:17.018567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:42:17.018719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:42:17.019715 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:42:17.020507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:42:17.021368 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 09:42:17.022184 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 09:42:17.022384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 09:42:17.032398 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:42:17.032914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:42:17.032945 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 09:42:17.051867 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:42:17.062941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 09:42:17.090090 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 09:42:17.091927 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 09:42:17.111118 systemd-resolved[1353]: Positive Trust Anchors:
Dec 16 09:42:17.112808 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 09:42:17.112838 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 09:42:17.118308 systemd-resolved[1353]: Using system hostname 'ci-4081-2-1-e-12e77f9037'.
Dec 16 09:42:17.120418 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 09:42:17.121090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:42:17.135502 systemd-networkd[1396]: lo: Link UP
Dec 16 09:42:17.135794 systemd-networkd[1396]: lo: Gained carrier
Dec 16 09:42:17.137602 systemd-networkd[1396]: Enumeration completed
Dec 16 09:42:17.137741 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 09:42:17.139318 systemd[1]: Reached target network.target - Network.
Dec 16 09:42:17.145953 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 09:42:17.155954 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 09:42:17.176791 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1404)
Dec 16 09:42:17.191801 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1404)
Dec 16 09:42:17.221981 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:42:17.221990 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:42:17.225243 systemd-networkd[1396]: eth0: Link UP
Dec 16 09:42:17.225355 systemd-networkd[1396]: eth0: Gained carrier
Dec 16 09:42:17.225615 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:42:17.237328 systemd-networkd[1396]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:42:17.237338 systemd-networkd[1396]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:42:17.242247 systemd-networkd[1396]: eth1: Link UP
Dec 16 09:42:17.242529 systemd-networkd[1396]: eth1: Gained carrier
Dec 16 09:42:17.242598 systemd-networkd[1396]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:42:17.251793 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1397)
Dec 16 09:42:17.264578 systemd-networkd[1396]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 09:42:17.265862 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Dec 16 09:42:17.285828 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 09:42:17.287832 systemd-networkd[1396]: eth0: DHCPv4 address 157.90.156.134/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 09:42:17.288530 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Dec 16 09:42:17.292826 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 16 09:42:17.296800 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 16 09:42:17.301053 kernel: Console: switching to colour dummy device 80x25
Dec 16 09:42:17.303041 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 09:42:17.303077 kernel: [drm] features: -context_init
Dec 16 09:42:17.305782 kernel: ACPI: button: Power Button [PWRF]
Dec 16 09:42:17.305816 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 09:42:17.313752 kernel: [drm] number of scanouts: 1
Dec 16 09:42:17.313842 kernel: [drm] number of cap sets: 0
Dec 16 09:42:17.313262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 09:42:17.315310 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 16 09:42:17.315603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:42:17.315709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:42:17.316802 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 16 09:42:17.320909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:42:17.323166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:42:17.328876 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 16 09:42:17.328913 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 09:42:17.327731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:42:17.338517 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 09:42:17.342922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:42:17.349955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 09:42:17.350062 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 09:42:17.350079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:42:17.350502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:42:17.350662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:42:17.367207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:42:17.367390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:42:17.371180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:42:17.377360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:42:17.377545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:42:17.379522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:42:17.388852 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 16 09:42:17.397160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 09:42:17.414487 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 09:42:17.419008 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 16 09:42:17.419192 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 09:42:17.421854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:17.427795 kernel: EDAC MC: Ver: 3.0.0
Dec 16 09:42:17.431315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:42:17.432775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:17.445892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:17.452077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:42:17.452340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:17.461927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:42:17.521302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:42:17.589925 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 16 09:42:17.597085 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 16 09:42:17.618438 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:42:17.655541 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 16 09:42:17.656081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:42:17.656237 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 09:42:17.656533 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 09:42:17.657385 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 09:42:17.661406 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 09:42:17.662984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 09:42:17.664447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 09:42:17.665439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 09:42:17.665587 systemd[1]: Reached target paths.target - Path Units.
Dec 16 09:42:17.666717 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 09:42:17.674689 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 09:42:17.679027 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 09:42:17.694841 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 09:42:17.707050 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 16 09:42:17.709689 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 09:42:17.712217 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 09:42:17.714085 systemd[1]: Reached target basic.target - Basic System.
Dec 16 09:42:17.714377 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:42:17.716024 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:42:17.716082 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:42:17.722928 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 09:42:17.734918 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 09:42:17.742171 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 09:42:17.745734 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 09:42:17.753934 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 09:42:17.757304 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 09:42:17.759947 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 09:42:17.763873 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 09:42:17.768954 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 16 09:42:17.771911 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 09:42:17.775580 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 09:42:17.778874 jq[1466]: false
Dec 16 09:42:17.793554 coreos-metadata[1462]: Dec 16 09:42:17.779 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 16 09:42:17.793554 coreos-metadata[1462]: Dec 16 09:42:17.780 INFO Fetch successful
Dec 16 09:42:17.793554 coreos-metadata[1462]: Dec 16 09:42:17.780 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 16 09:42:17.793554 coreos-metadata[1462]: Dec 16 09:42:17.781 INFO Fetch successful
Dec 16 09:42:17.789934 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 09:42:17.792306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 09:42:17.792716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 09:42:17.803936 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 09:42:17.807874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 09:42:17.810019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 16 09:42:17.813922 extend-filesystems[1467]: Found loop4
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found loop5
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found loop6
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found loop7
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda1
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda2
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda3
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found usr
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda4
Dec 16 09:42:17.815224 extend-filesystems[1467]: Found sda6
Dec 16 09:42:17.833240 extend-filesystems[1467]: Found sda7
Dec 16 09:42:17.833240 extend-filesystems[1467]: Found sda9
Dec 16 09:42:17.833240 extend-filesystems[1467]: Checking size of /dev/sda9
Dec 16 09:42:17.827323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 09:42:17.828590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 09:42:17.830241 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 09:42:17.830437 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 09:42:17.846240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 09:42:17.869814 jq[1483]: true
Dec 16 09:42:17.846885 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 09:42:17.856180 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 09:42:17.889272 extend-filesystems[1467]: Resized partition /dev/sda9
Dec 16 09:42:17.892144 jq[1493]: true
Dec 16 09:42:17.902844 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024)
Dec 16 09:42:17.907380 dbus-daemon[1465]: [system] SELinux support is enabled
Dec 16 09:42:17.940370 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 16 09:42:17.940405 update_engine[1481]: I20241216 09:42:17.911986 1481 main.cc:92] Flatcar Update Engine starting
Dec 16 09:42:17.908602 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 09:42:17.926913 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 09:42:17.926958 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 09:42:17.927412 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 09:42:17.927433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 09:42:17.952371 update_engine[1481]: I20241216 09:42:17.946919 1481 update_check_scheduler.cc:74] Next update check in 5m16s
Dec 16 09:42:17.949011 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 09:42:17.968220 tar[1490]: linux-amd64/helm
Dec 16 09:42:17.971699 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 09:42:18.015822 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1406)
Dec 16 09:42:18.046006 systemd-logind[1474]: New seat seat0.
Dec 16 09:42:18.055519 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 09:42:18.055546 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 09:42:18.055976 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 09:42:18.058065 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 09:42:18.060947 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 09:42:18.068255 bash[1535]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:42:18.068946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 09:42:18.083353 systemd[1]: Starting sshkeys.service...
Dec 16 09:42:18.095559 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 16 09:42:18.107527 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 09:42:18.117223 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 09:42:18.126232 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 09:42:18.126232 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 16 09:42:18.126232 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 16 09:42:18.140531 extend-filesystems[1467]: Resized filesystem in /dev/sda9
Dec 16 09:42:18.140531 extend-filesystems[1467]: Found sr0
Dec 16 09:42:18.129665 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 09:42:18.129871 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 09:42:18.178295 coreos-metadata[1540]: Dec 16 09:42:18.176 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 16 09:42:18.184947 coreos-metadata[1540]: Dec 16 09:42:18.184 INFO Fetch successful
Dec 16 09:42:18.189960 unknown[1540]: wrote ssh authorized keys file for user: core
Dec 16 09:42:18.232159 containerd[1492]: time="2024-12-16T09:42:18.231501344Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 16 09:42:18.240285 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:42:18.242489 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 09:42:18.247864 systemd[1]: Finished sshkeys.service.
Dec 16 09:42:18.283786 containerd[1492]: time="2024-12-16T09:42:18.282749890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.286781 containerd[1492]: time="2024-12-16T09:42:18.286740122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:42:18.286855 containerd[1492]: time="2024-12-16T09:42:18.286840100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 16 09:42:18.286905 containerd[1492]: time="2024-12-16T09:42:18.286893530Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 16 09:42:18.287106 containerd[1492]: time="2024-12-16T09:42:18.287090209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 16 09:42:18.287628 containerd[1492]: time="2024-12-16T09:42:18.287612017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.287777 containerd[1492]: time="2024-12-16T09:42:18.287742812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:42:18.287840 containerd[1492]: time="2024-12-16T09:42:18.287826699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.288054 containerd[1492]: time="2024-12-16T09:42:18.288035771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:42:18.288693 containerd[1492]: time="2024-12-16T09:42:18.288676623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.288749 containerd[1492]: time="2024-12-16T09:42:18.288735323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:42:18.288822 containerd[1492]: time="2024-12-16T09:42:18.288809452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.289693 containerd[1492]: time="2024-12-16T09:42:18.289676207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.289993 containerd[1492]: time="2024-12-16T09:42:18.289975599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:42:18.290279 containerd[1492]: time="2024-12-16T09:42:18.290248781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:42:18.290582 containerd[1492]: time="2024-12-16T09:42:18.290568741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 16 09:42:18.290796 containerd[1492]: time="2024-12-16T09:42:18.290746344Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 16 09:42:18.290922 containerd[1492]: time="2024-12-16T09:42:18.290905523Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 09:42:18.291847 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 09:42:18.294531 containerd[1492]: time="2024-12-16T09:42:18.294513488Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.294806768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.294832195Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.294846041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.294857743Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.294971146Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295129904Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295235712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295249678Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295260077Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295287399Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295298309Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295308469Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295318938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.295786 containerd[1492]: time="2024-12-16T09:42:18.295329428Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295339757Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295349385Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295357971Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295373951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295384711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295394449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295404588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295414567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295426169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295435186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295445125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295455043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296011 containerd[1492]: time="2024-12-16T09:42:18.295474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296407 containerd[1492]: time="2024-12-16T09:42:18.296391068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296688556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296706540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296719635Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296736537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296750753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.296820 containerd[1492]: time="2024-12-16T09:42:18.296777162Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 16 09:42:18.296960 containerd[1492]: time="2024-12-16T09:42:18.296945067Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 16 09:42:18.297072 containerd[1492]: time="2024-12-16T09:42:18.297056426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 16 09:42:18.297119 containerd[1492]: time="2024-12-16T09:42:18.297108524Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 16 09:42:18.297216 containerd[1492]: time="2024-12-16T09:42:18.297200887Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 16 09:42:18.298029 containerd[1492]: time="2024-12-16T09:42:18.297250580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.298029 containerd[1492]: time="2024-12-16T09:42:18.297284313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 16 09:42:18.298029 containerd[1492]: time="2024-12-16T09:42:18.297294562Z" level=info msg="NRI interface is disabled by configuration."
Dec 16 09:42:18.298029 containerd[1492]: time="2024-12-16T09:42:18.297304331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 16 09:42:18.298112 containerd[1492]: time="2024-12-16T09:42:18.297503885Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 16 09:42:18.298112 containerd[1492]: time="2024-12-16T09:42:18.297549260Z" level=info msg="Connect containerd service"
Dec 16 09:42:18.298112 containerd[1492]: time="2024-12-16T09:42:18.297575730Z" level=info msg="using legacy CRI server"
Dec 16 09:42:18.298112 containerd[1492]: time="2024-12-16T09:42:18.297581771Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 09:42:18.298112 containerd[1492]: time="2024-12-16T09:42:18.297646993Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 16 09:42:18.299113 containerd[1492]: time="2024-12-16T09:42:18.299066145Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.299890370Z" level=info msg="Start subscribing containerd event"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.299935515Z" level=info msg="Start recovering state"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.299986961Z" level=info msg="Start event monitor"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.300001068Z" level=info msg="Start snapshots syncer"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.300008772Z" level=info msg="Start cni network conf syncer for default"
Dec 16 09:42:18.300219 containerd[1492]: time="2024-12-16T09:42:18.300015415Z" level=info msg="Start streaming server"
Dec 16 09:42:18.300494 containerd[1492]: time="2024-12-16T09:42:18.300477541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 09:42:18.300957 containerd[1492]: time="2024-12-16T09:42:18.300676605Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 09:42:18.301010 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 09:42:18.304538 containerd[1492]: time="2024-12-16T09:42:18.303616707Z" level=info msg="containerd successfully booted in 0.072885s"
Dec 16 09:42:18.361003 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 09:42:18.382862 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 09:42:18.393958 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 09:42:18.402575 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 09:42:18.402826 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 09:42:18.413939 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 09:42:18.427400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 09:42:18.439229 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 09:42:18.443909 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 09:42:18.444394 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 09:42:18.539261 tar[1490]: linux-amd64/LICENSE
Dec 16 09:42:18.539429 tar[1490]: linux-amd64/README.md
Dec 16 09:42:18.550956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 09:42:18.705976 systemd-networkd[1396]: eth0: Gained IPv6LL
Dec 16 09:42:18.706545 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Dec 16 09:42:18.709265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 09:42:18.712138 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 09:42:18.721991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:42:18.726860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 09:42:18.760528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 09:42:18.962008 systemd-networkd[1396]: eth1: Gained IPv6LL
Dec 16 09:42:18.962452 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Dec 16 09:42:19.487013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:42:19.488368 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 09:42:19.492940 systemd[1]: Startup finished in 1.237s (kernel) + 5.333s (initrd) + 4.510s (userspace) = 11.082s.
Dec 16 09:42:19.495224 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:42:20.052127 kubelet[1592]: E1216 09:42:20.052038 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:42:20.056006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:42:20.056216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:42:30.306511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 09:42:30.311961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:42:30.444985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:42:30.448452 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:42:30.490019 kubelet[1612]: E1216 09:42:30.489947 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:42:30.496500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:42:30.496726 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:42:40.747174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 09:42:40.752938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:42:40.896680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:42:40.909051 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:42:40.947753 kubelet[1628]: E1216 09:42:40.947690 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:42:40.950515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:42:40.950735 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:42:49.706767 systemd-timesyncd[1371]: Contacted time server 94.130.184.193:123 (2.flatcar.pool.ntp.org).
Dec 16 09:42:49.706839 systemd-timesyncd[1371]: Initial clock synchronization to Mon 2024-12-16 09:42:49.706536 UTC.
Dec 16 09:42:49.707532 systemd-resolved[1353]: Clock change detected. Flushing caches.
Dec 16 09:42:51.701620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 09:42:51.708516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:42:51.842224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:42:51.846658 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:42:51.883811 kubelet[1644]: E1216 09:42:51.883734 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:42:51.887379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:42:51.887617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:01.951795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 09:43:01.957568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:02.111047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:02.125773 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:02.168027 kubelet[1660]: E1216 09:43:02.167982 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:02.170899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:02.171107 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:04.050430 update_engine[1481]: I20241216 09:43:04.050284 1481 update_attempter.cc:509] Updating boot flags...
Dec 16 09:43:04.099433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677)
Dec 16 09:43:04.148469 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677)
Dec 16 09:43:04.194183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677)
Dec 16 09:43:12.201532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 09:43:12.206776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:12.350283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:12.354787 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:12.391653 kubelet[1697]: E1216 09:43:12.391576 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:12.395837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:12.396038 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:22.451567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 16 09:43:22.456695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:22.583070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:22.586870 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:22.618167 kubelet[1713]: E1216 09:43:22.618103 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:22.622315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:22.622543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:32.701947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 16 09:43:32.713702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:32.872136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:32.875910 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:32.909450 kubelet[1730]: E1216 09:43:32.909394 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:32.913506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:32.913710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:42.951538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 16 09:43:42.957525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:43.074608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:43.079663 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:43.113177 kubelet[1746]: E1216 09:43:43.113120 1746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:43.116415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:43.116603 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:43:53.201559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 16 09:43:53.206557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:43:53.347991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:43:53.359715 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:43:53.394777 kubelet[1762]: E1216 09:43:53.394713 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:43:53.398748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:43:53.398954 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:44:03.451800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 16 09:44:03.457580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:44:03.617479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:44:03.621208 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:44:03.655070 kubelet[1779]: E1216 09:44:03.655018 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:44:03.659615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:44:03.659806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:44:13.701678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 16 09:44:13.708032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:44:13.838259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:44:13.842137 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:44:13.878740 kubelet[1796]: E1216 09:44:13.878691 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:44:13.882808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:44:13.883014 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:44:19.536923 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 09:44:19.547821 systemd[1]: Started sshd@0-157.90.156.134:22-147.75.109.163:39564.service - OpenSSH per-connection server daemon (147.75.109.163:39564).
Dec 16 09:44:20.521656 sshd[1805]: Accepted publickey for core from 147.75.109.163 port 39564 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:20.524421 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:20.536910 systemd-logind[1474]: New session 1 of user core.
Dec 16 09:44:20.538439 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 09:44:20.544649 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 09:44:20.560513 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 09:44:20.567669 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 09:44:20.582258 (systemd)[1809]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 09:44:20.684302 systemd[1809]: Queued start job for default target default.target.
Dec 16 09:44:20.692032 systemd[1809]: Created slice app.slice - User Application Slice.
Dec 16 09:44:20.692066 systemd[1809]: Reached target paths.target - Paths.
Dec 16 09:44:20.692097 systemd[1809]: Reached target timers.target - Timers.
Dec 16 09:44:20.694117 systemd[1809]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 09:44:20.708487 systemd[1809]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 09:44:20.709291 systemd[1809]: Reached target sockets.target - Sockets.
Dec 16 09:44:20.709319 systemd[1809]: Reached target basic.target - Basic System.
Dec 16 09:44:20.709395 systemd[1809]: Reached target default.target - Main User Target.
Dec 16 09:44:20.709443 systemd[1809]: Startup finished in 119ms.
Dec 16 09:44:20.710227 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 09:44:20.717537 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 09:44:21.404192 systemd[1]: Started sshd@1-157.90.156.134:22-147.75.109.163:39572.service - OpenSSH per-connection server daemon (147.75.109.163:39572).
Dec 16 09:44:22.387267 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 39572 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:22.389885 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:22.399480 systemd-logind[1474]: New session 2 of user core.
Dec 16 09:44:22.415608 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 09:44:23.077057 sshd[1820]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:23.080922 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit.
Dec 16 09:44:23.081740 systemd[1]: sshd@1-157.90.156.134:22-147.75.109.163:39572.service: Deactivated successfully.
Dec 16 09:44:23.083768 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 09:44:23.084707 systemd-logind[1474]: Removed session 2.
Dec 16 09:44:23.251808 systemd[1]: Started sshd@2-157.90.156.134:22-147.75.109.163:39580.service - OpenSSH per-connection server daemon (147.75.109.163:39580).
Dec 16 09:44:23.951859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 16 09:44:23.959540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:44:24.112140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:44:24.116187 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:44:24.149042 kubelet[1837]: E1216 09:44:24.148966 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:44:24.153219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:44:24.153447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:44:24.243208 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 39580 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:24.245659 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:24.253429 systemd-logind[1474]: New session 3 of user core. Dec 16 09:44:24.263475 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 09:44:24.920195 sshd[1827]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:24.924031 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Dec 16 09:44:24.924460 systemd[1]: sshd@2-157.90.156.134:22-147.75.109.163:39580.service: Deactivated successfully. Dec 16 09:44:24.926646 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 09:44:24.927517 systemd-logind[1474]: Removed session 3. Dec 16 09:44:25.091248 systemd[1]: Started sshd@3-157.90.156.134:22-147.75.109.163:39588.service - OpenSSH per-connection server daemon (147.75.109.163:39588). Dec 16 09:44:26.072594 sshd[1850]: Accepted publickey for core from 147.75.109.163 port 39588 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:26.074488 sshd[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:26.079442 systemd-logind[1474]: New session 4 of user core. Dec 16 09:44:26.086504 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 09:44:26.757518 sshd[1850]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:26.762785 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Dec 16 09:44:26.763986 systemd[1]: sshd@3-157.90.156.134:22-147.75.109.163:39588.service: Deactivated successfully. Dec 16 09:44:26.767004 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 09:44:26.768772 systemd-logind[1474]: Removed session 4. Dec 16 09:44:26.929754 systemd[1]: Started sshd@4-157.90.156.134:22-147.75.109.163:36710.service - OpenSSH per-connection server daemon (147.75.109.163:36710). Dec 16 09:44:27.920223 sshd[1857]: Accepted publickey for core from 147.75.109.163 port 36710 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:27.921850 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:27.927847 systemd-logind[1474]: New session 5 of user core. Dec 16 09:44:27.935497 systemd[1]: Started session-5.scope - Session 5 of User core. 
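The kubelet restart loop above (restart counters 10 through 12 in this stretch) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm, so every start fails until "kubeadm init" or "kubeadm join" runs on this node. A minimal diagnostic sketch, assuming shell access; only the unit name and config path are taken from the log:

  systemctl status kubelet.service --no-pager    # shows status=1/FAILURE and the restart counter
  ls -l /var/lib/kubelet/config.yaml             # "No such file or directory" until bootstrap runs
  journalctl -u kubelet.service -n 5 --no-pager  # repeats the "failed to load Kubelet config file" error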
Dec 16 09:44:28.451429 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 09:44:28.451793 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:44:28.465236 sudo[1860]: pam_unix(sudo:session): session closed for user root Dec 16 09:44:28.624632 sshd[1857]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:28.629220 systemd[1]: sshd@4-157.90.156.134:22-147.75.109.163:36710.service: Deactivated successfully. Dec 16 09:44:28.631664 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 09:44:28.633932 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Dec 16 09:44:28.635793 systemd-logind[1474]: Removed session 5. Dec 16 09:44:28.804687 systemd[1]: Started sshd@5-157.90.156.134:22-147.75.109.163:36720.service - OpenSSH per-connection server daemon (147.75.109.163:36720). Dec 16 09:44:29.784797 sshd[1865]: Accepted publickey for core from 147.75.109.163 port 36720 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:29.786529 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:29.791263 systemd-logind[1474]: New session 6 of user core. Dec 16 09:44:29.797510 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 09:44:30.308212 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 09:44:30.308683 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:44:30.312543 sudo[1869]: pam_unix(sudo:session): session closed for user root Dec 16 09:44:30.320481 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 16 09:44:30.320897 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:44:30.334572 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 16 09:44:30.337599 auditctl[1872]: No rules Dec 16 09:44:30.338052 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 09:44:30.338289 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 16 09:44:30.341086 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 16 09:44:30.377864 augenrules[1890]: No rules Dec 16 09:44:30.378878 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 16 09:44:30.379961 sudo[1868]: pam_unix(sudo:session): session closed for user root Dec 16 09:44:30.539410 sshd[1865]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:30.542815 systemd[1]: sshd@5-157.90.156.134:22-147.75.109.163:36720.service: Deactivated successfully. Dec 16 09:44:30.545317 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 09:44:30.547152 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Dec 16 09:44:30.548577 systemd-logind[1474]: Removed session 6. Dec 16 09:44:30.706308 systemd[1]: Started sshd@6-157.90.156.134:22-147.75.109.163:36724.service - OpenSSH per-connection server daemon (147.75.109.163:36724). Dec 16 09:44:31.687023 sshd[1898]: Accepted publickey for core from 147.75.109.163 port 36724 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:31.688745 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:31.693554 systemd-logind[1474]: New session 7 of user core. 
Dec 16 09:44:31.703485 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 09:44:32.210795 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 09:44:32.211175 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:44:32.484557 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 09:44:32.487046 (dockerd)[1917]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 09:44:32.735004 dockerd[1917]: time="2024-12-16T09:44:32.734939676Z" level=info msg="Starting up" Dec 16 09:44:32.830127 dockerd[1917]: time="2024-12-16T09:44:32.830023064Z" level=info msg="Loading containers: start." Dec 16 09:44:32.930509 kernel: Initializing XFRM netlink socket Dec 16 09:44:33.003465 systemd-networkd[1396]: docker0: Link UP Dec 16 09:44:33.032013 dockerd[1917]: time="2024-12-16T09:44:33.031959254Z" level=info msg="Loading containers: done." Dec 16 09:44:33.047658 dockerd[1917]: time="2024-12-16T09:44:33.047615017Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 09:44:33.047798 dockerd[1917]: time="2024-12-16T09:44:33.047699774Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 16 09:44:33.047854 dockerd[1917]: time="2024-12-16T09:44:33.047828873Z" level=info msg="Daemon has completed initialization" Dec 16 09:44:33.075786 dockerd[1917]: time="2024-12-16T09:44:33.075650069Z" level=info msg="API listen on /run/docker.sock" Dec 16 09:44:33.075962 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 09:44:34.181759 containerd[1492]: time="2024-12-16T09:44:34.181529793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 16 09:44:34.201383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 16 09:44:34.210789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:34.344181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:34.348415 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:44:34.380245 kubelet[2070]: E1216 09:44:34.380165 2070 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:44:34.384144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:44:34.384326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:44:34.787509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619659260.mount: Deactivated successfully. 
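docker.service is now up with the overlay2 storage driver and its API on /run/docker.sock, while the control-plane image pulls go through containerd. A quick verification sketch, assuming the docker CLI is installed; the format selectors are standard Go templates, not taken from this log:

  systemctl is-active docker.service
  docker info --format '{{.ServerVersion}} {{.Driver}}'   # expect: 26.1.0 overlay2, per the daemon log above
  docker info --format '{{.CgroupDriver}}'                # the kubelet below reports "CgroupDriver":"systemd"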
Dec 16 09:44:36.109252 containerd[1492]: time="2024-12-16T09:44:36.109201118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:36.110219 containerd[1492]: time="2024-12-16T09:44:36.110167058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675734" Dec 16 09:44:36.111144 containerd[1492]: time="2024-12-16T09:44:36.111102821Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:36.113328 containerd[1492]: time="2024-12-16T09:44:36.113292297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:36.114489 containerd[1492]: time="2024-12-16T09:44:36.114258397Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.932686185s" Dec 16 09:44:36.114489 containerd[1492]: time="2024-12-16T09:44:36.114288012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 16 09:44:36.132820 containerd[1492]: time="2024-12-16T09:44:36.132778386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 16 09:44:37.745613 containerd[1492]: time="2024-12-16T09:44:37.745536484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:37.746604 containerd[1492]: time="2024-12-16T09:44:37.746564288Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606429" Dec 16 09:44:37.747567 containerd[1492]: time="2024-12-16T09:44:37.747532623Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:37.749808 containerd[1492]: time="2024-12-16T09:44:37.749773345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:37.750868 containerd[1492]: time="2024-12-16T09:44:37.750534405Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.617675259s" Dec 16 09:44:37.750868 containerd[1492]: time="2024-12-16T09:44:37.750783166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 16 09:44:37.772573 
containerd[1492]: time="2024-12-16T09:44:37.772526219Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 16 09:44:38.744956 containerd[1492]: time="2024-12-16T09:44:38.744888692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:38.746238 containerd[1492]: time="2024-12-16T09:44:38.746194432Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783055" Dec 16 09:44:38.747170 containerd[1492]: time="2024-12-16T09:44:38.747132271Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:38.749933 containerd[1492]: time="2024-12-16T09:44:38.749893448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:38.751171 containerd[1492]: time="2024-12-16T09:44:38.750961719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 978.393061ms" Dec 16 09:44:38.751171 containerd[1492]: time="2024-12-16T09:44:38.750987507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 16 09:44:38.776955 containerd[1492]: time="2024-12-16T09:44:38.776905663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 16 09:44:39.672916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947988325.mount: Deactivated successfully. 
Dec 16 09:44:40.007018 containerd[1492]: time="2024-12-16T09:44:40.006940213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:40.007970 containerd[1492]: time="2024-12-16T09:44:40.007872472Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057496" Dec 16 09:44:40.008816 containerd[1492]: time="2024-12-16T09:44:40.008760219Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:40.012033 containerd[1492]: time="2024-12-16T09:44:40.011951650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:40.012826 containerd[1492]: time="2024-12-16T09:44:40.012455834Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.235510208s" Dec 16 09:44:40.012826 containerd[1492]: time="2024-12-16T09:44:40.012487463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 16 09:44:40.034694 containerd[1492]: time="2024-12-16T09:44:40.034639361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 16 09:44:40.562625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220003001.mount: Deactivated successfully. 
Dec 16 09:44:41.204035 containerd[1492]: time="2024-12-16T09:44:41.203943380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.204917 containerd[1492]: time="2024-12-16T09:44:41.204877512Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 16 09:44:41.205459 containerd[1492]: time="2024-12-16T09:44:41.205416082Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.207847 containerd[1492]: time="2024-12-16T09:44:41.207782493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.208807 containerd[1492]: time="2024-12-16T09:44:41.208684145Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.173991914s" Dec 16 09:44:41.208807 containerd[1492]: time="2024-12-16T09:44:41.208716696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 16 09:44:41.231156 containerd[1492]: time="2024-12-16T09:44:41.231085028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 16 09:44:41.733960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393503433.mount: Deactivated successfully. 
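The pull sequence that started at 09:44:34 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause) is the image set kubeadm expects for a v1.30 control plane. A hedged equivalent for pre-pulling it in one step, assuming kubeadm is available on the node:

  kubeadm config images list --kubernetes-version v1.30.8   # print the expected image set
  kubeadm config images pull --kubernetes-version v1.30.8   # fetch them ahead of "kubeadm init"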
Dec 16 09:44:41.739957 containerd[1492]: time="2024-12-16T09:44:41.739890619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.740931 containerd[1492]: time="2024-12-16T09:44:41.740884743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Dec 16 09:44:41.741776 containerd[1492]: time="2024-12-16T09:44:41.741695367Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.744038 containerd[1492]: time="2024-12-16T09:44:41.743925175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:41.745282 containerd[1492]: time="2024-12-16T09:44:41.744654137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 513.532953ms" Dec 16 09:44:41.745282 containerd[1492]: time="2024-12-16T09:44:41.744685025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 16 09:44:41.771363 containerd[1492]: time="2024-12-16T09:44:41.771287002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 16 09:44:42.316765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863563077.mount: Deactivated successfully. Dec 16 09:44:43.845128 containerd[1492]: time="2024-12-16T09:44:43.845071158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:43.846109 containerd[1492]: time="2024-12-16T09:44:43.846040998Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Dec 16 09:44:43.847024 containerd[1492]: time="2024-12-16T09:44:43.846984350Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:43.849661 containerd[1492]: time="2024-12-16T09:44:43.849626324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:44:43.851165 containerd[1492]: time="2024-12-16T09:44:43.850569416Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.079238592s" Dec 16 09:44:43.851165 containerd[1492]: time="2024-12-16T09:44:43.850628514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 16 09:44:44.451851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
Dec 16 09:44:44.458647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:44.634481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:44.638669 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:44:44.689043 kubelet[2302]: E1216 09:44:44.688974 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:44:44.692079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:44:44.692556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:44:46.867739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:46.873540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:46.897411 systemd[1]: Reloading requested from client PID 2357 ('systemctl') (unit session-7.scope)... Dec 16 09:44:46.897573 systemd[1]: Reloading... Dec 16 09:44:47.024373 zram_generator::config[2398]: No configuration found. Dec 16 09:44:47.125317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:44:47.191311 systemd[1]: Reloading finished in 292 ms. Dec 16 09:44:47.243559 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 09:44:47.243666 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 09:44:47.244006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:47.249851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:47.382441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:47.392622 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 09:44:47.428645 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:44:47.428645 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 09:44:47.428645 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
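The three deprecation warnings above all point at the same remedy: carry the flags in the KubeletConfiguration file instead of on the command line (--pod-infra-container-image has no config-file equivalent and is simply slated for removal). A sketch of the equivalent stanza, assuming the kubelet.config.k8s.io/v1beta1 schema; the runtime endpoint is illustrative, and the plugin directory mirrors the path the kubelet probes further down:

  cat <<'EOF' > /tmp/kubelet-config-fragment.yaml   # hypothetical fragment, to be merged by hand
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  EOF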
Dec 16 09:44:47.430559 kubelet[2450]: I1216 09:44:47.430509 2450 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 09:44:47.863446 kubelet[2450]: I1216 09:44:47.863398 2450 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 16 09:44:47.863446 kubelet[2450]: I1216 09:44:47.863429 2450 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 09:44:47.863700 kubelet[2450]: I1216 09:44:47.863666 2450 server.go:927] "Client rotation is on, will bootstrap in background" Dec 16 09:44:47.890828 kubelet[2450]: I1216 09:44:47.890778 2450 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 09:44:47.891875 kubelet[2450]: E1216 09:44:47.891741 2450 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.90.156.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.903105 kubelet[2450]: I1216 09:44:47.903065 2450 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 09:44:47.906062 kubelet[2450]: I1216 09:44:47.906010 2450 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 09:44:47.906218 kubelet[2450]: I1216 09:44:47.906047 2450 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-e-12e77f9037","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 16 09:44:47.906218 kubelet[2450]: I1216 09:44:47.906213 2450 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 09:44:47.906218 kubelet[2450]: I1216 09:44:47.906223 2450 container_manager_linux.go:301] "Creating device plugin manager" Dec 16 09:44:47.906419 kubelet[2450]: I1216 09:44:47.906365 2450 state_mem.go:36] "Initialized new 
in-memory state store" Dec 16 09:44:47.907839 kubelet[2450]: W1216 09:44:47.907734 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.156.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-12e77f9037&limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.907839 kubelet[2450]: E1216 09:44:47.907809 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.156.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-12e77f9037&limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.908663 kubelet[2450]: I1216 09:44:47.908630 2450 kubelet.go:400] "Attempting to sync node with API server" Dec 16 09:44:47.908663 kubelet[2450]: I1216 09:44:47.908655 2450 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:44:47.908758 kubelet[2450]: I1216 09:44:47.908681 2450 kubelet.go:312] "Adding apiserver pod source" Dec 16 09:44:47.908758 kubelet[2450]: I1216 09:44:47.908698 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:44:47.913761 kubelet[2450]: W1216 09:44:47.913732 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.156.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.913925 kubelet[2450]: E1216 09:44:47.913834 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.156.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.914434 kubelet[2450]: I1216 09:44:47.914190 2450 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:44:47.916035 kubelet[2450]: I1216 09:44:47.916016 2450 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:44:47.916181 kubelet[2450]: W1216 09:44:47.916165 2450 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 09:44:47.917274 kubelet[2450]: I1216 09:44:47.917256 2450 server.go:1264] "Started kubelet" Dec 16 09:44:47.919270 kubelet[2450]: I1216 09:44:47.919239 2450 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:44:47.920301 kubelet[2450]: I1216 09:44:47.920260 2450 server.go:455] "Adding debug handlers to kubelet server" Dec 16 09:44:47.922636 kubelet[2450]: I1216 09:44:47.922155 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:44:47.924538 kubelet[2450]: I1216 09:44:47.923912 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:44:47.924538 kubelet[2450]: I1216 09:44:47.924178 2450 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:44:47.924538 kubelet[2450]: E1216 09:44:47.924301 2450 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.156.134:6443/api/v1/namespaces/default/events\": dial tcp 157.90.156.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-e-12e77f9037.18119f1f363ebaa6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-12e77f9037,UID:ci-4081-2-1-e-12e77f9037,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-12e77f9037,},FirstTimestamp:2024-12-16 09:44:47.917234854 +0000 UTC m=+0.521462889,LastTimestamp:2024-12-16 09:44:47.917234854 +0000 UTC m=+0.521462889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-12e77f9037,}" Dec 16 09:44:47.927634 kubelet[2450]: E1216 09:44:47.926825 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:47.927634 kubelet[2450]: I1216 09:44:47.926879 2450 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 16 09:44:47.927634 kubelet[2450]: I1216 09:44:47.927014 2450 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 16 09:44:47.927634 kubelet[2450]: I1216 09:44:47.927074 2450 reconciler.go:26] "Reconciler: start to sync state" Dec 16 09:44:47.927634 kubelet[2450]: W1216 09:44:47.927376 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.156.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.927634 kubelet[2450]: E1216 09:44:47.927415 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.156.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.927634 kubelet[2450]: E1216 09:44:47.927624 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.156.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-12e77f9037?timeout=10s\": dial tcp 157.90.156.134:6443: connect: connection refused" interval="200ms" Dec 16 09:44:47.930329 kubelet[2450]: I1216 09:44:47.930309 2450 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:44:47.930898 kubelet[2450]: I1216 09:44:47.930868 
2450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:44:47.932856 kubelet[2450]: I1216 09:44:47.932837 2450 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:44:47.951048 kubelet[2450]: I1216 09:44:47.951010 2450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 09:44:47.952903 kubelet[2450]: I1216 09:44:47.952888 2450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 09:44:47.953020 kubelet[2450]: I1216 09:44:47.953009 2450 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:44:47.953090 kubelet[2450]: I1216 09:44:47.953080 2450 kubelet.go:2337] "Starting kubelet main sync loop" Dec 16 09:44:47.953179 kubelet[2450]: E1216 09:44:47.953163 2450 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 09:44:47.960520 kubelet[2450]: W1216 09:44:47.960460 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.156.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.960609 kubelet[2450]: E1216 09:44:47.960596 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.156.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:47.969021 kubelet[2450]: I1216 09:44:47.968986 2450 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:44:47.969021 kubelet[2450]: I1216 09:44:47.969007 2450 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:44:47.969021 kubelet[2450]: I1216 09:44:47.969023 2450 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:44:47.970404 kubelet[2450]: I1216 09:44:47.970379 2450 policy_none.go:49] "None policy: Start" Dec 16 09:44:47.970929 kubelet[2450]: I1216 09:44:47.970809 2450 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:44:47.970986 kubelet[2450]: I1216 09:44:47.970956 2450 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:44:47.977167 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 09:44:47.985210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 09:44:47.988105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
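The kubepods slices created above are the kubelet's QoS cgroup hierarchy: Guaranteed pods sit directly under kubepods.slice, while Burstable and BestEffort pods land in the two sub-slices. A small inspection sketch, assuming a systemd-managed cgroup tree (consistent with "CgroupDriver":"systemd" in the node config above):

  systemctl list-units --type=slice 'kubepods*' --no-pager
  systemd-cgls /kubepods.slice --no-pager | head -n 20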
Dec 16 09:44:47.998489 kubelet[2450]: I1216 09:44:47.998455 2450 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:44:47.998659 kubelet[2450]: I1216 09:44:47.998621 2450 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 09:44:47.999027 kubelet[2450]: I1216 09:44:47.998727 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:44:48.000632 kubelet[2450]: E1216 09:44:48.000592 2450 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:48.033424 kubelet[2450]: I1216 09:44:48.033380 2450 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.034332 kubelet[2450]: E1216 09:44:48.034285 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.156.134:6443/api/v1/nodes\": dial tcp 157.90.156.134:6443: connect: connection refused" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.054080 kubelet[2450]: I1216 09:44:48.053784 2450 topology_manager.go:215] "Topology Admit Handler" podUID="de4eebf4cb60eec6907471289a4054f1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.055977 kubelet[2450]: I1216 09:44:48.055957 2450 topology_manager.go:215] "Topology Admit Handler" podUID="04084f48fdae0c0afa8346e834781a32" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.058375 kubelet[2450]: I1216 09:44:48.058183 2450 topology_manager.go:215] "Topology Admit Handler" podUID="1ff1e5ebeb2f1a891a31198978ed06c5" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.065417 systemd[1]: Created slice kubepods-burstable-podde4eebf4cb60eec6907471289a4054f1.slice - libcontainer container kubepods-burstable-podde4eebf4cb60eec6907471289a4054f1.slice. Dec 16 09:44:48.078102 systemd[1]: Created slice kubepods-burstable-pod04084f48fdae0c0afa8346e834781a32.slice - libcontainer container kubepods-burstable-pod04084f48fdae0c0afa8346e834781a32.slice. Dec 16 09:44:48.091199 systemd[1]: Created slice kubepods-burstable-pod1ff1e5ebeb2f1a891a31198978ed06c5.slice - libcontainer container kubepods-burstable-pod1ff1e5ebeb2f1a891a31198978ed06c5.slice. 
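The three pods admitted above are static pods; their UIDs (de4eebf4..., 04084f48..., 1ff1e5eb...) are derived from the manifest contents, and each UID reappears verbatim in the per-pod slice name. Under a standard kubeadm layout (an assumption, not confirmed by this log) the manifests live in the path the kubelet registered earlier:

  ls /etc/kubernetes/manifests
  # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  (plus etcd.yaml when etcd runs locally)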
Dec 16 09:44:48.129305 kubelet[2450]: I1216 09:44:48.128967 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129305 kubelet[2450]: E1216 09:44:48.129018 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.156.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-12e77f9037?timeout=10s\": dial tcp 157.90.156.134:6443: connect: connection refused" interval="400ms" Dec 16 09:44:48.129305 kubelet[2450]: I1216 09:44:48.129037 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04084f48fdae0c0afa8346e834781a32-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-12e77f9037\" (UID: \"04084f48fdae0c0afa8346e834781a32\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129305 kubelet[2450]: I1216 09:44:48.129077 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129305 kubelet[2450]: I1216 09:44:48.129100 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129587 kubelet[2450]: I1216 09:44:48.129137 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129587 kubelet[2450]: I1216 09:44:48.129167 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129587 kubelet[2450]: I1216 09:44:48.129189 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129587 kubelet[2450]: I1216 09:44:48.129208 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.129587 kubelet[2450]: I1216 09:44:48.129227 2450 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.237513 kubelet[2450]: I1216 09:44:48.237224 2450 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.237742 kubelet[2450]: E1216 09:44:48.237672 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.156.134:6443/api/v1/nodes\": dial tcp 157.90.156.134:6443: connect: connection refused" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.377523 containerd[1492]: time="2024-12-16T09:44:48.377455798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-12e77f9037,Uid:de4eebf4cb60eec6907471289a4054f1,Namespace:kube-system,Attempt:0,}" Dec 16 09:44:48.390548 containerd[1492]: time="2024-12-16T09:44:48.390336479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-12e77f9037,Uid:04084f48fdae0c0afa8346e834781a32,Namespace:kube-system,Attempt:0,}" Dec 16 09:44:48.395070 containerd[1492]: time="2024-12-16T09:44:48.395025034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-12e77f9037,Uid:1ff1e5ebeb2f1a891a31198978ed06c5,Namespace:kube-system,Attempt:0,}" Dec 16 09:44:48.529848 kubelet[2450]: E1216 09:44:48.529772 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.156.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-12e77f9037?timeout=10s\": dial tcp 157.90.156.134:6443: connect: connection refused" interval="800ms" Dec 16 09:44:48.640383 kubelet[2450]: I1216 09:44:48.640330 2450 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.640872 kubelet[2450]: E1216 09:44:48.640759 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.156.134:6443/api/v1/nodes\": dial tcp 157.90.156.134:6443: connect: connection refused" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:48.751175 kubelet[2450]: W1216 09:44:48.751092 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.156.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:48.751175 kubelet[2450]: E1216 09:44:48.751157 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.156.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:48.879601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468363110.mount: Deactivated successfully. 
Dec 16 09:44:48.888095 containerd[1492]: time="2024-12-16T09:44:48.888019732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:44:48.889657 containerd[1492]: time="2024-12-16T09:44:48.889603156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:44:48.890471 containerd[1492]: time="2024-12-16T09:44:48.890392503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 16 09:44:48.891653 containerd[1492]: time="2024-12-16T09:44:48.891558860Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:44:48.892756 containerd[1492]: time="2024-12-16T09:44:48.892696765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:44:48.893796 containerd[1492]: time="2024-12-16T09:44:48.893759160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:44:48.895773 containerd[1492]: time="2024-12-16T09:44:48.895656596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:44:48.898959 containerd[1492]: time="2024-12-16T09:44:48.898903231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:44:48.899631 containerd[1492]: time="2024-12-16T09:44:48.899589937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.015378ms" Dec 16 09:44:48.903979 containerd[1492]: time="2024-12-16T09:44:48.903226516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.133628ms" Dec 16 09:44:48.906648 containerd[1492]: time="2024-12-16T09:44:48.906425593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.982826ms" Dec 16 09:44:48.927777 systemd[1]: Started sshd@7-157.90.156.134:22-92.255.85.188:46406.service - OpenSSH per-connection server daemon (92.255.85.188:46406). Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050857105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050968191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050984411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.051138567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050683922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050776234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050798074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.051960 containerd[1492]: time="2024-12-16T09:44:49.050881439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.054136 containerd[1492]: time="2024-12-16T09:44:49.053867881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:44:49.054136 containerd[1492]: time="2024-12-16T09:44:49.053924336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:44:49.054136 containerd[1492]: time="2024-12-16T09:44:49.053958560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.054136 containerd[1492]: time="2024-12-16T09:44:49.054046854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:44:49.079504 systemd[1]: Started cri-containerd-d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310.scope - libcontainer container d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310. Dec 16 09:44:49.086469 systemd[1]: Started cri-containerd-917cb6b33997105a763bea5c0e3451d7f04ce1fd155837e2dd66e775dbc3c22b.scope - libcontainer container 917cb6b33997105a763bea5c0e3451d7f04ce1fd155837e2dd66e775dbc3c22b. Dec 16 09:44:49.089273 systemd[1]: Started cri-containerd-ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28.scope - libcontainer container ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28. 
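The three cri-containerd-*.scope units started above are the pod sandboxes for the control-plane static pods; the long hex IDs in the scope names are the CRI sandbox IDs. A sketch for mapping them back to pods, assuming crictl is configured for this containerd:

  crictl pods      # sandbox IDs here match the cri-containerd-<id>.scope unit names
  crictl ps -a     # the kube-apiserver/controller-manager/scheduler containers inside them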
Dec 16 09:44:49.133381 containerd[1492]: time="2024-12-16T09:44:49.133268364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-12e77f9037,Uid:de4eebf4cb60eec6907471289a4054f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310\"" Dec 16 09:44:49.151414 containerd[1492]: time="2024-12-16T09:44:49.151232210Z" level=info msg="CreateContainer within sandbox \"d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 09:44:49.173603 containerd[1492]: time="2024-12-16T09:44:49.173433255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-12e77f9037,Uid:1ff1e5ebeb2f1a891a31198978ed06c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"917cb6b33997105a763bea5c0e3451d7f04ce1fd155837e2dd66e775dbc3c22b\"" Dec 16 09:44:49.178228 containerd[1492]: time="2024-12-16T09:44:49.178177524Z" level=info msg="CreateContainer within sandbox \"917cb6b33997105a763bea5c0e3451d7f04ce1fd155837e2dd66e775dbc3c22b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 09:44:49.182440 containerd[1492]: time="2024-12-16T09:44:49.182314154Z" level=info msg="CreateContainer within sandbox \"d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a\"" Dec 16 09:44:49.182809 containerd[1492]: time="2024-12-16T09:44:49.182751538Z" level=info msg="StartContainer for \"91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a\"" Dec 16 09:44:49.186209 containerd[1492]: time="2024-12-16T09:44:49.186115832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-12e77f9037,Uid:04084f48fdae0c0afa8346e834781a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28\"" Dec 16 09:44:49.189586 containerd[1492]: time="2024-12-16T09:44:49.189466371Z" level=info msg="CreateContainer within sandbox \"ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 09:44:49.194454 kubelet[2450]: W1216 09:44:49.194314 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.156.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-12e77f9037&limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.194454 kubelet[2450]: E1216 09:44:49.194425 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.156.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-12e77f9037&limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.195201 containerd[1492]: time="2024-12-16T09:44:49.195172499Z" level=info msg="CreateContainer within sandbox \"917cb6b33997105a763bea5c0e3451d7f04ce1fd155837e2dd66e775dbc3c22b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3b35a71aca14d95bc9823f1407891635b4de80a61e2216960d6dc5db3b55ae7\"" Dec 16 09:44:49.196217 containerd[1492]: time="2024-12-16T09:44:49.196142433Z" level=info msg="StartContainer for 
\"d3b35a71aca14d95bc9823f1407891635b4de80a61e2216960d6dc5db3b55ae7\"" Dec 16 09:44:49.207263 containerd[1492]: time="2024-12-16T09:44:49.207232199Z" level=info msg="CreateContainer within sandbox \"ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919\"" Dec 16 09:44:49.208774 containerd[1492]: time="2024-12-16T09:44:49.207883890Z" level=info msg="StartContainer for \"6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919\"" Dec 16 09:44:49.221631 systemd[1]: Started cri-containerd-91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a.scope - libcontainer container 91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a. Dec 16 09:44:49.236861 systemd[1]: Started cri-containerd-d3b35a71aca14d95bc9823f1407891635b4de80a61e2216960d6dc5db3b55ae7.scope - libcontainer container d3b35a71aca14d95bc9823f1407891635b4de80a61e2216960d6dc5db3b55ae7. Dec 16 09:44:49.249795 systemd[1]: Started cri-containerd-6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919.scope - libcontainer container 6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919. Dec 16 09:44:49.292616 kubelet[2450]: W1216 09:44:49.292558 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.156.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.292865 kubelet[2450]: E1216 09:44:49.292845 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.156.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.303303 containerd[1492]: time="2024-12-16T09:44:49.303255926Z" level=info msg="StartContainer for \"d3b35a71aca14d95bc9823f1407891635b4de80a61e2216960d6dc5db3b55ae7\" returns successfully" Dec 16 09:44:49.306210 containerd[1492]: time="2024-12-16T09:44:49.306136251Z" level=info msg="StartContainer for \"91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a\" returns successfully" Dec 16 09:44:49.323328 containerd[1492]: time="2024-12-16T09:44:49.323212767Z" level=info msg="StartContainer for \"6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919\" returns successfully" Dec 16 09:44:49.331292 kubelet[2450]: E1216 09:44:49.331222 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.156.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-12e77f9037?timeout=10s\": dial tcp 157.90.156.134:6443: connect: connection refused" interval="1.6s" Dec 16 09:44:49.343690 sshd[2485]: Invalid user nutanix from 92.255.85.188 port 46406 Dec 16 09:44:49.354522 kubelet[2450]: W1216 09:44:49.354438 2450 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.156.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.354522 kubelet[2450]: E1216 09:44:49.354502 2450 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://157.90.156.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.156.134:6443: connect: connection refused Dec 16 09:44:49.416423 sshd[2485]: Connection closed by invalid user nutanix 92.255.85.188 port 46406 [preauth] Dec 16 09:44:49.418160 systemd[1]: sshd@7-157.90.156.134:22-92.255.85.188:46406.service: Deactivated successfully. Dec 16 09:44:49.444027 kubelet[2450]: I1216 09:44:49.443980 2450 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:49.444384 kubelet[2450]: E1216 09:44:49.444301 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.156.134:6443/api/v1/nodes\": dial tcp 157.90.156.134:6443: connect: connection refused" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:50.959010 kubelet[2450]: E1216 09:44:50.958126 2450 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-e-12e77f9037\" not found" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:51.047482 kubelet[2450]: I1216 09:44:51.047428 2450 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:51.058296 kubelet[2450]: I1216 09:44:51.058258 2450 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:51.064101 kubelet[2450]: E1216 09:44:51.064069 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.165264 kubelet[2450]: E1216 09:44:51.165201 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.266552 kubelet[2450]: E1216 09:44:51.266327 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.367599 kubelet[2450]: E1216 09:44:51.367515 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.468112 kubelet[2450]: E1216 09:44:51.468026 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.569005 kubelet[2450]: E1216 09:44:51.568833 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.669850 kubelet[2450]: E1216 09:44:51.669711 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.770317 kubelet[2450]: E1216 09:44:51.770274 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.871821 kubelet[2450]: E1216 09:44:51.871697 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:51.972695 kubelet[2450]: E1216 09:44:51.972538 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:52.073468 kubelet[2450]: E1216 09:44:52.073394 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:52.174370 kubelet[2450]: E1216 09:44:52.174221 2450 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:52.275372 kubelet[2450]: E1216 09:44:52.275260 2450 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-12e77f9037\" not found" Dec 16 09:44:52.829867 systemd[1]: Reloading requested from client PID 2728 ('systemctl') (unit session-7.scope)... Dec 16 09:44:52.829889 systemd[1]: Reloading... Dec 16 09:44:52.915808 kubelet[2450]: I1216 09:44:52.913297 2450 apiserver.go:52] "Watching apiserver" Dec 16 09:44:52.928016 kubelet[2450]: I1216 09:44:52.927976 2450 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 16 09:44:52.938388 zram_generator::config[2777]: No configuration found. Dec 16 09:44:53.018080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:44:53.094534 systemd[1]: Reloading finished in 264 ms. Dec 16 09:44:53.134757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:53.147544 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 09:44:53.147791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:53.152819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:44:53.280506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:44:53.281841 (kubelet)[2819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 09:44:53.332000 kubelet[2819]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:44:53.332000 kubelet[2819]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 09:44:53.332000 kubelet[2819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:44:53.333721 kubelet[2819]: I1216 09:44:53.332793 2819 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 09:44:53.338027 kubelet[2819]: I1216 09:44:53.337998 2819 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 16 09:44:53.338027 kubelet[2819]: I1216 09:44:53.338021 2819 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 09:44:53.338185 kubelet[2819]: I1216 09:44:53.338160 2819 server.go:927] "Client rotation is on, will bootstrap in background" Dec 16 09:44:53.339236 kubelet[2819]: I1216 09:44:53.339209 2819 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 09:44:53.340497 kubelet[2819]: I1216 09:44:53.340334 2819 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 09:44:53.347051 kubelet[2819]: I1216 09:44:53.346548 2819 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 09:44:53.347051 kubelet[2819]: I1216 09:44:53.346741 2819 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 09:44:53.347051 kubelet[2819]: I1216 09:44:53.346761 2819 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-e-12e77f9037","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 16 09:44:53.347051 kubelet[2819]: I1216 09:44:53.346890 2819 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 09:44:53.347260 kubelet[2819]: I1216 09:44:53.346898 2819 container_manager_linux.go:301] "Creating device plugin manager" Dec 16 09:44:53.348894 kubelet[2819]: I1216 09:44:53.348869 2819 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:44:53.350283 kubelet[2819]: I1216 09:44:53.348989 2819 kubelet.go:400] "Attempting to sync node with API server" Dec 16 09:44:53.350283 kubelet[2819]: I1216 09:44:53.349034 2819 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:44:53.350283 kubelet[2819]: I1216 09:44:53.349054 2819 kubelet.go:312] "Adding apiserver pod source" Dec 16 09:44:53.350283 kubelet[2819]: I1216 09:44:53.349071 2819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:44:53.353950 kubelet[2819]: I1216 09:44:53.353305 2819 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:44:53.353950 kubelet[2819]: I1216 09:44:53.353475 2819 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:44:53.356325 kubelet[2819]: I1216 09:44:53.355308 2819 server.go:1264] "Started kubelet" Dec 16 09:44:53.357657 kubelet[2819]: I1216 09:44:53.357645 2819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:44:53.366247 kubelet[2819]: I1216 09:44:53.366217 2819 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:44:53.366645 kubelet[2819]: I1216 09:44:53.366603 2819 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:44:53.366898 kubelet[2819]: I1216 09:44:53.366885 2819 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:44:53.370358 kubelet[2819]: I1216 09:44:53.370333 2819 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 16 09:44:53.374499 kubelet[2819]: I1216 09:44:53.374485 2819 server.go:455] "Adding debug handlers to kubelet server" Dec 16 09:44:53.376931 kubelet[2819]: I1216 09:44:53.376906 2819 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 16 09:44:53.377382 kubelet[2819]: I1216 09:44:53.377336 2819 reconciler.go:26] "Reconciler: start to sync state" Dec 16 09:44:53.377529 kubelet[2819]: I1216 09:44:53.377426 2819 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:44:53.377876 kubelet[2819]: I1216 09:44:53.377847 2819 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:44:53.380819 kubelet[2819]: I1216 09:44:53.380773 2819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 09:44:53.382581 kubelet[2819]: I1216 09:44:53.382566 2819 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:44:53.383896 kubelet[2819]: I1216 09:44:53.383870 2819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 09:44:53.383952 kubelet[2819]: I1216 09:44:53.383906 2819 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:44:53.383952 kubelet[2819]: I1216 09:44:53.383933 2819 kubelet.go:2337] "Starting kubelet main sync loop" Dec 16 09:44:53.384002 kubelet[2819]: E1216 09:44:53.383969 2819 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 09:44:53.387179 kubelet[2819]: E1216 09:44:53.387164 2819 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 09:44:53.455548 kubelet[2819]: I1216 09:44:53.455527 2819 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:44:53.455902 kubelet[2819]: I1216 09:44:53.455688 2819 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:44:53.455902 kubelet[2819]: I1216 09:44:53.455713 2819 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:44:53.455902 kubelet[2819]: I1216 09:44:53.455838 2819 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 09:44:53.455902 kubelet[2819]: I1216 09:44:53.455847 2819 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 09:44:53.455902 kubelet[2819]: I1216 09:44:53.455864 2819 policy_none.go:49] "None policy: Start" Dec 16 09:44:53.456501 kubelet[2819]: I1216 09:44:53.456488 2819 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:44:53.456642 kubelet[2819]: I1216 09:44:53.456632 2819 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:44:53.457378 kubelet[2819]: I1216 09:44:53.456774 2819 state_mem.go:75] "Updated machine memory state" Dec 16 09:44:53.463042 kubelet[2819]: I1216 09:44:53.462906 2819 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:44:53.464701 kubelet[2819]: I1216 09:44:53.464648 2819 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 09:44:53.464818 kubelet[2819]: I1216 09:44:53.464800 2819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:44:53.475733 kubelet[2819]: I1216 09:44:53.475712 2819 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.485037 kubelet[2819]: I1216 09:44:53.484977 2819 topology_manager.go:215] "Topology Admit Handler" podUID="1ff1e5ebeb2f1a891a31198978ed06c5" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.485216 kubelet[2819]: I1216 09:44:53.485201 2819 topology_manager.go:215] "Topology Admit Handler" podUID="de4eebf4cb60eec6907471289a4054f1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.486268 kubelet[2819]: I1216 09:44:53.485427 2819 topology_manager.go:215] "Topology Admit Handler" podUID="04084f48fdae0c0afa8346e834781a32" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.489694 kubelet[2819]: I1216 09:44:53.489677 2819 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.489939 kubelet[2819]: I1216 09:44:53.489908 2819 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.499498 kubelet[2819]: E1216 09:44:53.499221 2819 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678500 kubelet[2819]: I1216 09:44:53.678315 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678500 kubelet[2819]: I1216 
09:44:53.678399 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678500 kubelet[2819]: I1216 09:44:53.678432 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ff1e5ebeb2f1a891a31198978ed06c5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" (UID: \"1ff1e5ebeb2f1a891a31198978ed06c5\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678500 kubelet[2819]: I1216 09:44:53.678460 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678500 kubelet[2819]: I1216 09:44:53.678486 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678743 kubelet[2819]: I1216 09:44:53.678509 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678743 kubelet[2819]: I1216 09:44:53.678535 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678743 kubelet[2819]: I1216 09:44:53.678564 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de4eebf4cb60eec6907471289a4054f1-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" (UID: \"de4eebf4cb60eec6907471289a4054f1\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:53.678743 kubelet[2819]: I1216 09:44:53.678591 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04084f48fdae0c0afa8346e834781a32-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-12e77f9037\" (UID: \"04084f48fdae0c0afa8346e834781a32\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:54.353746 kubelet[2819]: I1216 09:44:54.353692 2819 apiserver.go:52] "Watching apiserver" Dec 16 09:44:54.378162 kubelet[2819]: 
I1216 09:44:54.378118 2819 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 16 09:44:54.436261 kubelet[2819]: E1216 09:44:54.436011 2819 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-e-12e77f9037\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:54.437690 kubelet[2819]: E1216 09:44:54.437642 2819 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-e-12e77f9037\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" Dec 16 09:44:54.478880 kubelet[2819]: I1216 09:44:54.478580 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-12e77f9037" podStartSLOduration=2.478563031 podStartE2EDuration="2.478563031s" podCreationTimestamp="2024-12-16 09:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:44:54.459312157 +0000 UTC m=+1.172616640" watchObservedRunningTime="2024-12-16 09:44:54.478563031 +0000 UTC m=+1.191867505" Dec 16 09:44:54.491316 kubelet[2819]: I1216 09:44:54.491263 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-e-12e77f9037" podStartSLOduration=1.491246616 podStartE2EDuration="1.491246616s" podCreationTimestamp="2024-12-16 09:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:44:54.478766099 +0000 UTC m=+1.192070582" watchObservedRunningTime="2024-12-16 09:44:54.491246616 +0000 UTC m=+1.204551089" Dec 16 09:44:54.506808 kubelet[2819]: I1216 09:44:54.506675 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-e-12e77f9037" podStartSLOduration=1.506661091 podStartE2EDuration="1.506661091s" podCreationTimestamp="2024-12-16 09:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:44:54.491667539 +0000 UTC m=+1.204972012" watchObservedRunningTime="2024-12-16 09:44:54.506661091 +0000 UTC m=+1.219965564" Dec 16 09:44:58.516757 sudo[1901]: pam_unix(sudo:session): session closed for user root Dec 16 09:44:58.677636 sshd[1898]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:58.681579 systemd[1]: sshd@6-157.90.156.134:22-147.75.109.163:36724.service: Deactivated successfully. Dec 16 09:44:58.683639 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 09:44:58.684168 systemd[1]: session-7.scope: Consumed 4.586s CPU time, 189.4M memory peak, 0B memory swap peak. Dec 16 09:44:58.686296 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Dec 16 09:44:58.688048 systemd-logind[1474]: Removed session 7. Dec 16 09:45:06.764409 kubelet[2819]: I1216 09:45:06.764330 2819 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 09:45:06.765006 containerd[1492]: time="2024-12-16T09:45:06.764795353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
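(The PodCIDR handoff above travels over the CRI: once the Node object carries spec.podCIDR, kubelet calls the runtime's UpdateRuntimeConfig RPC, and containerd replies with the "No cni config template is specified" line because no template is configured; the entries that follow show kubelet updating its own view of the Pod CIDR, and Calico later drops in the real CNI config. A rough standalone sketch of that RPC using the published CRI API, not kubelet source; the socket path is containerd's default:

// podcidr.go: hand the runtime a PodCIDR the way kubelet's
// kuberuntime_manager does after the node sync.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Mirrors the log: kubelet propagates CIDR="192.168.0.0/24" to the runtime.
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
})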
Dec 16 09:45:06.765278 kubelet[2819]: I1216 09:45:06.765100 2819 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 09:45:07.410124 kubelet[2819]: I1216 09:45:07.410071 2819 topology_manager.go:215] "Topology Admit Handler" podUID="ddf944c9-4a84-4868-b982-733dec9f46ad" podNamespace="kube-system" podName="kube-proxy-vrgfp" Dec 16 09:45:07.424376 systemd[1]: Created slice kubepods-besteffort-podddf944c9_4a84_4868_b982_733dec9f46ad.slice - libcontainer container kubepods-besteffort-podddf944c9_4a84_4868_b982_733dec9f46ad.slice. Dec 16 09:45:07.564537 kubelet[2819]: I1216 09:45:07.564477 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhm2j\" (UniqueName: \"kubernetes.io/projected/ddf944c9-4a84-4868-b982-733dec9f46ad-kube-api-access-rhm2j\") pod \"kube-proxy-vrgfp\" (UID: \"ddf944c9-4a84-4868-b982-733dec9f46ad\") " pod="kube-system/kube-proxy-vrgfp" Dec 16 09:45:07.564537 kubelet[2819]: I1216 09:45:07.564525 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddf944c9-4a84-4868-b982-733dec9f46ad-kube-proxy\") pod \"kube-proxy-vrgfp\" (UID: \"ddf944c9-4a84-4868-b982-733dec9f46ad\") " pod="kube-system/kube-proxy-vrgfp" Dec 16 09:45:07.564537 kubelet[2819]: I1216 09:45:07.564545 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf944c9-4a84-4868-b982-733dec9f46ad-xtables-lock\") pod \"kube-proxy-vrgfp\" (UID: \"ddf944c9-4a84-4868-b982-733dec9f46ad\") " pod="kube-system/kube-proxy-vrgfp" Dec 16 09:45:07.564725 kubelet[2819]: I1216 09:45:07.564561 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf944c9-4a84-4868-b982-733dec9f46ad-lib-modules\") pod \"kube-proxy-vrgfp\" (UID: \"ddf944c9-4a84-4868-b982-733dec9f46ad\") " pod="kube-system/kube-proxy-vrgfp" Dec 16 09:45:07.670778 kubelet[2819]: E1216 09:45:07.670673 2819 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 09:45:07.670778 kubelet[2819]: E1216 09:45:07.670705 2819 projected.go:200] Error preparing data for projected volume kube-api-access-rhm2j for pod kube-system/kube-proxy-vrgfp: configmap "kube-root-ca.crt" not found Dec 16 09:45:07.670778 kubelet[2819]: E1216 09:45:07.670756 2819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddf944c9-4a84-4868-b982-733dec9f46ad-kube-api-access-rhm2j podName:ddf944c9-4a84-4868-b982-733dec9f46ad nodeName:}" failed. No retries permitted until 2024-12-16 09:45:08.170739375 +0000 UTC m=+14.884043849 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rhm2j" (UniqueName: "kubernetes.io/projected/ddf944c9-4a84-4868-b982-733dec9f46ad-kube-api-access-rhm2j") pod "kube-proxy-vrgfp" (UID: "ddf944c9-4a84-4868-b982-733dec9f46ad") : configmap "kube-root-ca.crt" not found Dec 16 09:45:07.873436 kubelet[2819]: I1216 09:45:07.872972 2819 topology_manager.go:215] "Topology Admit Handler" podUID="76fa4222-2ade-4e40-b52b-edd5199a31e9" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-n5gzt" Dec 16 09:45:07.879426 systemd[1]: Created slice kubepods-besteffort-pod76fa4222_2ade_4e40_b52b_edd5199a31e9.slice - libcontainer container kubepods-besteffort-pod76fa4222_2ade_4e40_b52b_edd5199a31e9.slice. Dec 16 09:45:08.069255 kubelet[2819]: I1216 09:45:08.069201 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76fa4222-2ade-4e40-b52b-edd5199a31e9-var-lib-calico\") pod \"tigera-operator-7bc55997bb-n5gzt\" (UID: \"76fa4222-2ade-4e40-b52b-edd5199a31e9\") " pod="tigera-operator/tigera-operator-7bc55997bb-n5gzt" Dec 16 09:45:08.069427 kubelet[2819]: I1216 09:45:08.069280 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8wn\" (UniqueName: \"kubernetes.io/projected/76fa4222-2ade-4e40-b52b-edd5199a31e9-kube-api-access-sx8wn\") pod \"tigera-operator-7bc55997bb-n5gzt\" (UID: \"76fa4222-2ade-4e40-b52b-edd5199a31e9\") " pod="tigera-operator/tigera-operator-7bc55997bb-n5gzt" Dec 16 09:45:08.184471 containerd[1492]: time="2024-12-16T09:45:08.184175981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n5gzt,Uid:76fa4222-2ade-4e40-b52b-edd5199a31e9,Namespace:tigera-operator,Attempt:0,}" Dec 16 09:45:08.224705 containerd[1492]: time="2024-12-16T09:45:08.224564611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:45:08.224705 containerd[1492]: time="2024-12-16T09:45:08.224648938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:45:08.224705 containerd[1492]: time="2024-12-16T09:45:08.224665459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:08.225311 containerd[1492]: time="2024-12-16T09:45:08.225158409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:08.256504 systemd[1]: Started cri-containerd-195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b.scope - libcontainer container 195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b. 
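(The "No retries permitted until ... (durationBeforeRetry 500ms)" failure above is the volume manager's per-operation exponential backoff: each failed MountVolume.SetUp lengthens the delay before the reconciler may retry, and the mount succeeds as soon as the kube-root-ca.crt configmap exists. A toy sketch of the schedule; the 500ms starting point comes from the log, while the doubling factor and the cap are assumptions, not kubelet constants read from this output:

// backoff.go: illustrate the retry schedule behind durationBeforeRetry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialBackoff = 500 * time.Millisecond // first retry delay seen in the log
		maxBackoff     = 2*time.Minute + 2*time.Second // assumed cap
	)
	backoff := initialBackoff
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed: configmap %q not found; next retry in %v\n",
			attempt, "kube-root-ca.crt", backoff)
		backoff *= 2 // assumed factor of 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
})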
Dec 16 09:45:08.298979 containerd[1492]: time="2024-12-16T09:45:08.298942547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n5gzt,Uid:76fa4222-2ade-4e40-b52b-edd5199a31e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b\"" Dec 16 09:45:08.302323 containerd[1492]: time="2024-12-16T09:45:08.302083395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 16 09:45:08.331661 containerd[1492]: time="2024-12-16T09:45:08.331524308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrgfp,Uid:ddf944c9-4a84-4868-b982-733dec9f46ad,Namespace:kube-system,Attempt:0,}" Dec 16 09:45:08.352841 containerd[1492]: time="2024-12-16T09:45:08.352634627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:45:08.352841 containerd[1492]: time="2024-12-16T09:45:08.352696922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:45:08.352841 containerd[1492]: time="2024-12-16T09:45:08.352710368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:08.352841 containerd[1492]: time="2024-12-16T09:45:08.352788795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:08.371495 systemd[1]: Started cri-containerd-9359c632ab98f7bc490691ddbd2b9e500d4b3e5dd810dbd53fe7ccf633aa4e6c.scope - libcontainer container 9359c632ab98f7bc490691ddbd2b9e500d4b3e5dd810dbd53fe7ccf633aa4e6c. Dec 16 09:45:08.391463 containerd[1492]: time="2024-12-16T09:45:08.391401843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrgfp,Uid:ddf944c9-4a84-4868-b982-733dec9f46ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9359c632ab98f7bc490691ddbd2b9e500d4b3e5dd810dbd53fe7ccf633aa4e6c\"" Dec 16 09:45:08.394248 containerd[1492]: time="2024-12-16T09:45:08.394226751Z" level=info msg="CreateContainer within sandbox \"9359c632ab98f7bc490691ddbd2b9e500d4b3e5dd810dbd53fe7ccf633aa4e6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 09:45:08.412516 containerd[1492]: time="2024-12-16T09:45:08.412473029Z" level=info msg="CreateContainer within sandbox \"9359c632ab98f7bc490691ddbd2b9e500d4b3e5dd810dbd53fe7ccf633aa4e6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d35f32b12566fd589db399c62affcfcc08ca071f397197d3db854eec028fa23\"" Dec 16 09:45:08.413172 containerd[1492]: time="2024-12-16T09:45:08.413006032Z" level=info msg="StartContainer for \"8d35f32b12566fd589db399c62affcfcc08ca071f397197d3db854eec028fa23\"" Dec 16 09:45:08.439617 systemd[1]: Started cri-containerd-8d35f32b12566fd589db399c62affcfcc08ca071f397197d3db854eec028fa23.scope - libcontainer container 8d35f32b12566fd589db399c62affcfcc08ca071f397197d3db854eec028fa23. 
Dec 16 09:45:08.475232 containerd[1492]: time="2024-12-16T09:45:08.474806934Z" level=info msg="StartContainer for \"8d35f32b12566fd589db399c62affcfcc08ca071f397197d3db854eec028fa23\" returns successfully" Dec 16 09:45:09.467500 kubelet[2819]: I1216 09:45:09.467184 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vrgfp" podStartSLOduration=2.4671666930000002 podStartE2EDuration="2.467166693s" podCreationTimestamp="2024-12-16 09:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:45:09.466878636 +0000 UTC m=+16.180183109" watchObservedRunningTime="2024-12-16 09:45:09.467166693 +0000 UTC m=+16.180471176" Dec 16 09:45:10.129066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184903471.mount: Deactivated successfully. Dec 16 09:45:10.559644 containerd[1492]: time="2024-12-16T09:45:10.559576289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:10.561027 containerd[1492]: time="2024-12-16T09:45:10.560975598Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764309" Dec 16 09:45:10.565064 containerd[1492]: time="2024-12-16T09:45:10.565024160Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:10.568556 containerd[1492]: time="2024-12-16T09:45:10.568509692Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:10.569896 containerd[1492]: time="2024-12-16T09:45:10.569560732Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.267447233s" Dec 16 09:45:10.569896 containerd[1492]: time="2024-12-16T09:45:10.569604463Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 16 09:45:10.574629 containerd[1492]: time="2024-12-16T09:45:10.574590174Z" level=info msg="CreateContainer within sandbox \"195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 09:45:10.586304 containerd[1492]: time="2024-12-16T09:45:10.586266965Z" level=info msg="CreateContainer within sandbox \"195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593\"" Dec 16 09:45:10.587404 containerd[1492]: time="2024-12-16T09:45:10.586721823Z" level=info msg="StartContainer for \"d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593\"" Dec 16 09:45:10.628657 systemd[1]: Started cri-containerd-d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593.scope - libcontainer container d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593. 
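(A rough cross-check of the operator pull above: 21764309 bytes read in 2.267447233 s works out to about 9.6 MB/s from quay.io on this vServer. The slightly different size "21758492" reported in the Pulled line is the image's recorded content size, which need not equal the bytes actually transferred.)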
Dec 16 09:45:10.661688 containerd[1492]: time="2024-12-16T09:45:10.661634399Z" level=info msg="StartContainer for \"d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593\" returns successfully" Dec 16 09:45:13.729294 kubelet[2819]: I1216 09:45:13.728623 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-n5gzt" podStartSLOduration=4.456046285 podStartE2EDuration="6.728603678s" podCreationTimestamp="2024-12-16 09:45:07 +0000 UTC" firstStartedPulling="2024-12-16 09:45:08.300514078 +0000 UTC m=+15.013818561" lastFinishedPulling="2024-12-16 09:45:10.573071471 +0000 UTC m=+17.286375954" observedRunningTime="2024-12-16 09:45:11.472726479 +0000 UTC m=+18.186030952" watchObservedRunningTime="2024-12-16 09:45:13.728603678 +0000 UTC m=+20.441908152" Dec 16 09:45:13.731546 kubelet[2819]: I1216 09:45:13.730436 2819 topology_manager.go:215] "Topology Admit Handler" podUID="a08842b7-2383-49d6-9281-eb63646143c8" podNamespace="calico-system" podName="calico-typha-55798bc6-nl54n" Dec 16 09:45:13.749780 systemd[1]: Created slice kubepods-besteffort-poda08842b7_2383_49d6_9281_eb63646143c8.slice - libcontainer container kubepods-besteffort-poda08842b7_2383_49d6_9281_eb63646143c8.slice. Dec 16 09:45:13.811452 kubelet[2819]: I1216 09:45:13.811172 2819 topology_manager.go:215] "Topology Admit Handler" podUID="9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39" podNamespace="calico-system" podName="calico-node-xw5bf" Dec 16 09:45:13.818676 kubelet[2819]: I1216 09:45:13.818641 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a08842b7-2383-49d6-9281-eb63646143c8-tigera-ca-bundle\") pod \"calico-typha-55798bc6-nl54n\" (UID: \"a08842b7-2383-49d6-9281-eb63646143c8\") " pod="calico-system/calico-typha-55798bc6-nl54n" Dec 16 09:45:13.818818 kubelet[2819]: I1216 09:45:13.818682 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a08842b7-2383-49d6-9281-eb63646143c8-typha-certs\") pod \"calico-typha-55798bc6-nl54n\" (UID: \"a08842b7-2383-49d6-9281-eb63646143c8\") " pod="calico-system/calico-typha-55798bc6-nl54n" Dec 16 09:45:13.818818 kubelet[2819]: I1216 09:45:13.818699 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcgl\" (UniqueName: \"kubernetes.io/projected/a08842b7-2383-49d6-9281-eb63646143c8-kube-api-access-2vcgl\") pod \"calico-typha-55798bc6-nl54n\" (UID: \"a08842b7-2383-49d6-9281-eb63646143c8\") " pod="calico-system/calico-typha-55798bc6-nl54n" Dec 16 09:45:13.822276 systemd[1]: Created slice kubepods-besteffort-pod9a65bac9_039b_43c9_a5e4_ecdfe9fdcc39.slice - libcontainer container kubepods-besteffort-pod9a65bac9_039b_43c9_a5e4_ecdfe9fdcc39.slice. 
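(The tigera-operator startup-latency line above decomposes cleanly: podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp = 09:45:13.728603678 − 09:45:07 = 6.728603678 s, and podStartSLOduration subtracts the image pull window, 09:45:10.573071471 − 09:45:08.300514078 = 2.272557393 s, giving 6.728603678 − 2.272557393 = 4.456046285 s, exactly the value the tracker reports. Pull time is excluded from the SLO figure, which is why the kube-proxy and control-plane pods earlier show identical SLO and E2E durations with zero-valued pull timestamps.)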
Dec 16 09:45:13.919699 kubelet[2819]: I1216 09:45:13.919654 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-tigera-ca-bundle\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919699 kubelet[2819]: I1216 09:45:13.919692 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-var-lib-calico\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919943 kubelet[2819]: I1216 09:45:13.919726 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-var-run-calico\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919943 kubelet[2819]: I1216 09:45:13.919740 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-flexvol-driver-host\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919943 kubelet[2819]: I1216 09:45:13.919757 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-policysync\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919943 kubelet[2819]: I1216 09:45:13.919771 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-cni-log-dir\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.919943 kubelet[2819]: I1216 09:45:13.919784 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-node-certs\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.921059 kubelet[2819]: I1216 09:45:13.919815 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-cni-bin-dir\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.921059 kubelet[2819]: I1216 09:45:13.919827 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-cni-net-dir\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.921059 kubelet[2819]: I1216 09:45:13.919854 2819 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhtc7\" (UniqueName: \"kubernetes.io/projected/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-kube-api-access-jhtc7\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.921059 kubelet[2819]: I1216 09:45:13.919877 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-lib-modules\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.921059 kubelet[2819]: I1216 09:45:13.919890 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39-xtables-lock\") pod \"calico-node-xw5bf\" (UID: \"9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39\") " pod="calico-system/calico-node-xw5bf" Dec 16 09:45:13.941998 kubelet[2819]: I1216 09:45:13.939872 2819 topology_manager.go:215] "Topology Admit Handler" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" podNamespace="calico-system" podName="csi-node-driver-l5q54" Dec 16 09:45:13.941998 kubelet[2819]: E1216 09:45:13.940162 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:14.020768 kubelet[2819]: I1216 09:45:14.020658 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a2c894e-6c25-40a0-9a45-3b04bb3fe827-kubelet-dir\") pod \"csi-node-driver-l5q54\" (UID: \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\") " pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:14.020768 kubelet[2819]: I1216 09:45:14.020702 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4a2c894e-6c25-40a0-9a45-3b04bb3fe827-socket-dir\") pod \"csi-node-driver-l5q54\" (UID: \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\") " pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:14.020768 kubelet[2819]: I1216 09:45:14.020732 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4a2c894e-6c25-40a0-9a45-3b04bb3fe827-varrun\") pod \"csi-node-driver-l5q54\" (UID: \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\") " pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:14.020768 kubelet[2819]: I1216 09:45:14.020762 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4a2c894e-6c25-40a0-9a45-3b04bb3fe827-registration-dir\") pod \"csi-node-driver-l5q54\" (UID: \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\") " pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:14.020955 kubelet[2819]: I1216 09:45:14.020799 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nslkm\" (UniqueName: \"kubernetes.io/projected/4a2c894e-6c25-40a0-9a45-3b04bb3fe827-kube-api-access-nslkm\") pod \"csi-node-driver-l5q54\" (UID: 
\"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\") " pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:14.023904 kubelet[2819]: E1216 09:45:14.023883 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.024142 kubelet[2819]: W1216 09:45:14.024024 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.024142 kubelet[2819]: E1216 09:45:14.024057 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.024534 kubelet[2819]: E1216 09:45:14.024416 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.024534 kubelet[2819]: W1216 09:45:14.024428 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.024604 kubelet[2819]: E1216 09:45:14.024538 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.024743 kubelet[2819]: E1216 09:45:14.024730 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.024875 kubelet[2819]: W1216 09:45:14.024787 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.024875 kubelet[2819]: E1216 09:45:14.024835 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.025890 kubelet[2819]: E1216 09:45:14.025012 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.025996 kubelet[2819]: W1216 09:45:14.025981 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.026133 kubelet[2819]: E1216 09:45:14.026087 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.026390 kubelet[2819]: E1216 09:45:14.026290 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.026390 kubelet[2819]: W1216 09:45:14.026302 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.026390 kubelet[2819]: E1216 09:45:14.026332 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:45:14.026663 kubelet[2819]: E1216 09:45:14.026574 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.026663 kubelet[2819]: W1216 09:45:14.026584 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.026663 kubelet[2819]: E1216 09:45:14.026611 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.026833 kubelet[2819]: E1216 09:45:14.026819 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.026884 kubelet[2819]: W1216 09:45:14.026874 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.026998 kubelet[2819]: E1216 09:45:14.026971 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.027459 kubelet[2819]: E1216 09:45:14.027328 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.027459 kubelet[2819]: W1216 09:45:14.027339 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.027459 kubelet[2819]: E1216 09:45:14.027380 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.027689 kubelet[2819]: E1216 09:45:14.027677 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.027761 kubelet[2819]: W1216 09:45:14.027735 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.027979 kubelet[2819]: E1216 09:45:14.027935 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.029755 kubelet[2819]: E1216 09:45:14.029676 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.029755 kubelet[2819]: W1216 09:45:14.029688 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.029853 kubelet[2819]: E1216 09:45:14.029835 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:45:14.030093 kubelet[2819]: E1216 09:45:14.030015 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.030093 kubelet[2819]: W1216 09:45:14.030026 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.030277 kubelet[2819]: E1216 09:45:14.030183 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.030395 kubelet[2819]: E1216 09:45:14.030383 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.030743 kubelet[2819]: W1216 09:45:14.030440 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.030743 kubelet[2819]: E1216 09:45:14.030456 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.031166 kubelet[2819]: E1216 09:45:14.031151 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.031244 kubelet[2819]: W1216 09:45:14.031233 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.031295 kubelet[2819]: E1216 09:45:14.031285 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.036295 kubelet[2819]: E1216 09:45:14.036251 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.036455 kubelet[2819]: W1216 09:45:14.036440 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.036717 kubelet[2819]: E1216 09:45:14.036700 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:14.043984 kubelet[2819]: E1216 09:45:14.043964 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:14.045472 kubelet[2819]: W1216 09:45:14.045410 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:14.045472 kubelet[2819]: E1216 09:45:14.045443 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 16 09:45:14.060183 containerd[1492]: time="2024-12-16T09:45:14.060060069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55798bc6-nl54n,Uid:a08842b7-2383-49d6-9281-eb63646143c8,Namespace:calico-system,Attempt:0,}"
Dec 16 09:45:14.085363 containerd[1492]: time="2024-12-16T09:45:14.085239495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:14.085561 containerd[1492]: time="2024-12-16T09:45:14.085393924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:14.085561 containerd[1492]: time="2024-12-16T09:45:14.085440381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:14.085666 containerd[1492]: time="2024-12-16T09:45:14.085597143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:14.109491 systemd[1]: Started cri-containerd-1e460cf213893470a2b4d70cfeb871fab8768f9f223ef34d94af66e368c45de8.scope - libcontainer container 1e460cf213893470a2b4d70cfeb871fab8768f9f223ef34d94af66e368c45de8.
Dec 16 09:45:14.122018 kubelet[2819]: E1216 09:45:14.121953 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.122018 kubelet[2819]: W1216 09:45:14.121987 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.122190 kubelet[2819]: E1216 09:45:14.122004 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.123686 kubelet[2819]: E1216 09:45:14.123515 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.123686 kubelet[2819]: W1216 09:45:14.123566 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.123686 kubelet[2819]: E1216 09:45:14.123577 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.124896 kubelet[2819]: E1216 09:45:14.124585 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.124896 kubelet[2819]: W1216 09:45:14.124598 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.124896 kubelet[2819]: E1216 09:45:14.124607 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.124896 kubelet[2819]: E1216 09:45:14.124843 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.124896 kubelet[2819]: W1216 09:45:14.124851 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.124896 kubelet[2819]: E1216 09:45:14.124860 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.125058 kubelet[2819]: E1216 09:45:14.125048 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.125058 kubelet[2819]: W1216 09:45:14.125055 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.125104 kubelet[2819]: E1216 09:45:14.125063 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.125642 kubelet[2819]: E1216 09:45:14.125496 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.125642 kubelet[2819]: W1216 09:45:14.125509 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.125642 kubelet[2819]: E1216 09:45:14.125518 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.126311 kubelet[2819]: E1216 09:45:14.126126 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.126311 kubelet[2819]: W1216 09:45:14.126138 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.126311 kubelet[2819]: E1216 09:45:14.126147 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.126547 kubelet[2819]: E1216 09:45:14.126392 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.126547 kubelet[2819]: W1216 09:45:14.126401 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.126547 kubelet[2819]: E1216 09:45:14.126409 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.127174 kubelet[2819]: E1216 09:45:14.127156 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.127174 kubelet[2819]: W1216 09:45:14.127169 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.127371 kubelet[2819]: E1216 09:45:14.127178 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.132883 kubelet[2819]: E1216 09:45:14.132843 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.133134 kubelet[2819]: W1216 09:45:14.132945 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.133134 kubelet[2819]: E1216 09:45:14.133002 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.134395 kubelet[2819]: E1216 09:45:14.133791 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.134395 kubelet[2819]: W1216 09:45:14.133803 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.134395 kubelet[2819]: E1216 09:45:14.133813 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.134395 kubelet[2819]: E1216 09:45:14.134308 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.134395 kubelet[2819]: W1216 09:45:14.134319 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.134395 kubelet[2819]: E1216 09:45:14.134327 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.134631 kubelet[2819]: E1216 09:45:14.134604 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.134631 kubelet[2819]: W1216 09:45:14.134639 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.135443 kubelet[2819]: E1216 09:45:14.134649 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.135865 kubelet[2819]: E1216 09:45:14.135843 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.135865 kubelet[2819]: W1216 09:45:14.135859 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.135945 kubelet[2819]: E1216 09:45:14.135869 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.136127 kubelet[2819]: E1216 09:45:14.136105 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.136127 kubelet[2819]: W1216 09:45:14.136121 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.136127 kubelet[2819]: E1216 09:45:14.136131 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.136598 kubelet[2819]: E1216 09:45:14.136402 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.136598 kubelet[2819]: W1216 09:45:14.136425 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.136598 kubelet[2819]: E1216 09:45:14.136434 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.138058 kubelet[2819]: E1216 09:45:14.137913 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.138058 kubelet[2819]: W1216 09:45:14.137937 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.138058 kubelet[2819]: E1216 09:45:14.137946 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.138374 kubelet[2819]: E1216 09:45:14.138179 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.138374 kubelet[2819]: W1216 09:45:14.138191 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.138374 kubelet[2819]: E1216 09:45:14.138199 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.138593 kubelet[2819]: E1216 09:45:14.138439 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.138593 kubelet[2819]: W1216 09:45:14.138499 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.138593 kubelet[2819]: E1216 09:45:14.138509 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.139006 kubelet[2819]: E1216 09:45:14.138956 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.139006 kubelet[2819]: W1216 09:45:14.138965 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.139006 kubelet[2819]: E1216 09:45:14.138974 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.140370 kubelet[2819]: E1216 09:45:14.139211 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.140370 kubelet[2819]: W1216 09:45:14.140141 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.140370 kubelet[2819]: E1216 09:45:14.140153 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.140456 kubelet[2819]: E1216 09:45:14.140397 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.140456 kubelet[2819]: W1216 09:45:14.140406 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.140456 kubelet[2819]: E1216 09:45:14.140415 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
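Conversely, the probe would go quiet if the probed path held any binary that answers init with well-formed JSON. A hypothetical stand-in, following the published FlexVolume convention of a JSON object with a status field; this is a sketch, not the actual nodeagent uds driver the directory name suggests:

```go
// Minimal FlexVolume-style driver: answer "init" with a JSON status object.
// Installed executable at the path the probe above checks
// (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds),
// a binary like this would satisfy the init call.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(map[string]interface{}{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Verbs this driver does not implement still get valid JSON back.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```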
Dec 16 09:45:14.141062 containerd[1492]: time="2024-12-16T09:45:14.140833300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xw5bf,Uid:9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39,Namespace:calico-system,Attempt:0,}"
Dec 16 09:45:14.141787 kubelet[2819]: E1216 09:45:14.141304 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.141787 kubelet[2819]: W1216 09:45:14.141318 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.141787 kubelet[2819]: E1216 09:45:14.141327 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.141962 kubelet[2819]: E1216 09:45:14.141678 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.142005 kubelet[2819]: W1216 09:45:14.141985 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.142005 kubelet[2819]: E1216 09:45:14.141999 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.142833 kubelet[2819]: E1216 09:45:14.142804 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.142833 kubelet[2819]: W1216 09:45:14.142821 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.142833 kubelet[2819]: E1216 09:45:14.142831 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.156522 kubelet[2819]: E1216 09:45:14.156496 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:14.156739 kubelet[2819]: W1216 09:45:14.156645 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:14.156739 kubelet[2819]: E1216 09:45:14.156669 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:14.181810 containerd[1492]: time="2024-12-16T09:45:14.177293077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:14.181810 containerd[1492]: time="2024-12-16T09:45:14.177428159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:14.181810 containerd[1492]: time="2024-12-16T09:45:14.177484223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:14.181810 containerd[1492]: time="2024-12-16T09:45:14.180310608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:14.206560 systemd[1]: Started cri-containerd-89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851.scope - libcontainer container 89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851.
Dec 16 09:45:14.209007 containerd[1492]: time="2024-12-16T09:45:14.208937849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55798bc6-nl54n,Uid:a08842b7-2383-49d6-9281-eb63646143c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e460cf213893470a2b4d70cfeb871fab8768f9f223ef34d94af66e368c45de8\""
Dec 16 09:45:14.215833 containerd[1492]: time="2024-12-16T09:45:14.215706421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 16 09:45:14.243008 containerd[1492]: time="2024-12-16T09:45:14.242593674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xw5bf,Uid:9a65bac9-039b-43c9-a5e4-ecdfe9fdcc39,Namespace:calico-system,Attempt:0,} returns sandbox id \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\""
Dec 16 09:45:15.386434 kubelet[2819]: E1216 09:45:15.384621 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827"
Dec 16 09:45:15.901797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686004909.mount: Deactivated successfully.
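The containerd and systemd entries interleaved here show one CRI round trip end to end: RunPodSandbox arrives with the pod's metadata, the runc v2 shim loads its ttrpc plugins, systemd parents the container into a cri-containerd-*.scope unit, and the sandbox ID is handed back. A sketch of issuing the same call through the published CRI gRPC API (k8s.io/cri-api); the socket path and dial options are assumptions, the metadata is copied from the log:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI socket; this path is the common default, assumed here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Same metadata the log shows containerd receiving.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-55798bc6-nl54n",
				Uid:       "a08842b7-2383-49d6-9281-eb63646143c8",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 1e460cf21389...
}
```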
Dec 16 09:45:16.826379 containerd[1492]: time="2024-12-16T09:45:16.826213379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:16.827233 containerd[1492]: time="2024-12-16T09:45:16.826954353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 16 09:45:16.828382 containerd[1492]: time="2024-12-16T09:45:16.827862928Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:16.830087 containerd[1492]: time="2024-12-16T09:45:16.830046283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:16.830958 containerd[1492]: time="2024-12-16T09:45:16.830586892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.614817685s"
Dec 16 09:45:16.830958 containerd[1492]: time="2024-12-16T09:45:16.830614464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 16 09:45:16.833462 containerd[1492]: time="2024-12-16T09:45:16.833435870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 16 09:45:16.844200 containerd[1492]: time="2024-12-16T09:45:16.844168001Z" level=info msg="CreateContainer within sandbox \"1e460cf213893470a2b4d70cfeb871fab8768f9f223ef34d94af66e368c45de8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 09:45:16.856784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450142628.mount: Deactivated successfully.
Dec 16 09:45:16.860356 containerd[1492]: time="2024-12-16T09:45:16.860310460Z" level=info msg="CreateContainer within sandbox \"1e460cf213893470a2b4d70cfeb871fab8768f9f223ef34d94af66e368c45de8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6ae47499285064422b05980f8f024ddde16bfa9e583ae25f41f4d78225bc9579\""
Dec 16 09:45:16.861012 containerd[1492]: time="2024-12-16T09:45:16.860961375Z" level=info msg="StartContainer for \"6ae47499285064422b05980f8f024ddde16bfa9e583ae25f41f4d78225bc9579\""
Dec 16 09:45:16.906489 systemd[1]: Started cri-containerd-6ae47499285064422b05980f8f024ddde16bfa9e583ae25f41f4d78225bc9579.scope - libcontainer container 6ae47499285064422b05980f8f024ddde16bfa9e583ae25f41f4d78225bc9579.
Dec 16 09:45:16.948629 containerd[1492]: time="2024-12-16T09:45:16.948588086Z" level=info msg="StartContainer for \"6ae47499285064422b05980f8f024ddde16bfa9e583ae25f41f4d78225bc9579\" returns successfully"
Dec 16 09:45:17.388283 kubelet[2819]: E1216 09:45:17.387502 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827"
Dec 16 09:45:17.498970 kubelet[2819]: I1216 09:45:17.497826 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55798bc6-nl54n" podStartSLOduration=1.8804583849999998 podStartE2EDuration="4.497808054s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:14.214268066 +0000 UTC m=+20.927572540" lastFinishedPulling="2024-12-16 09:45:16.831617736 +0000 UTC m=+23.544922209" observedRunningTime="2024-12-16 09:45:17.497557156 +0000 UTC m=+24.210861689" watchObservedRunningTime="2024-12-16 09:45:17.497808054 +0000 UTC m=+24.211112548"
Dec 16 09:45:17.537054 kubelet[2819]: E1216 09:45:17.537011 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.537054 kubelet[2819]: W1216 09:45:17.537042 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.537274 kubelet[2819]: E1216 09:45:17.537067 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.537441 kubelet[2819]: E1216 09:45:17.537417 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.537488 kubelet[2819]: W1216 09:45:17.537434 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.537488 kubelet[2819]: E1216 09:45:17.537476 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.537768 kubelet[2819]: E1216 09:45:17.537746 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.537768 kubelet[2819]: W1216 09:45:17.537763 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.537839 kubelet[2819]: E1216 09:45:17.537775 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
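The pod_startup_latency_tracker entry above is internally consistent: subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) from the end-to-end duration gives exactly the reported SLO duration, including its float64 artifact 1.8804583849999998. A quick check with the log's own timestamps; the subtraction rule is inferred from these numbers, not quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// time.Parse accepts fractional seconds in the input even though the
	// layout string carries none.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-16 09:45:13 +0000 UTC")
	firstPull := mustParse("2024-12-16 09:45:14.214268066 +0000 UTC")
	lastPull := mustParse("2024-12-16 09:45:16.831617736 +0000 UTC")
	running := mustParse("2024-12-16 09:45:17.497808054 +0000 UTC")

	e2e := running.Sub(created)          // 4.497808054s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 1.880458384s (podStartSLOduration)
	fmt.Println(e2e, slo)
}
```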
Dec 16 09:45:17.538083 kubelet[2819]: E1216 09:45:17.538057 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.538083 kubelet[2819]: W1216 09:45:17.538073 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.538178 kubelet[2819]: E1216 09:45:17.538085 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.538364 kubelet[2819]: E1216 09:45:17.538317 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.538364 kubelet[2819]: W1216 09:45:17.538333 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.538503 kubelet[2819]: E1216 09:45:17.538401 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.538619 kubelet[2819]: E1216 09:45:17.538604 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.538619 kubelet[2819]: W1216 09:45:17.538618 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.538619 kubelet[2819]: E1216 09:45:17.538628 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.539307 kubelet[2819]: E1216 09:45:17.538814 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.539307 kubelet[2819]: W1216 09:45:17.538822 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.539307 kubelet[2819]: E1216 09:45:17.538831 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.539307 kubelet[2819]: E1216 09:45:17.539095 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.539307 kubelet[2819]: W1216 09:45:17.539104 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.539307 kubelet[2819]: E1216 09:45:17.539113 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.539307 kubelet[2819]: E1216 09:45:17.539299 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.539307 kubelet[2819]: W1216 09:45:17.539307 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.539735 kubelet[2819]: E1216 09:45:17.539315 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.539735 kubelet[2819]: E1216 09:45:17.539537 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.539735 kubelet[2819]: W1216 09:45:17.539546 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.539735 kubelet[2819]: E1216 09:45:17.539556 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.539959 kubelet[2819]: E1216 09:45:17.539776 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.539959 kubelet[2819]: W1216 09:45:17.539785 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.539959 kubelet[2819]: E1216 09:45:17.539795 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.540120 kubelet[2819]: E1216 09:45:17.539995 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.540120 kubelet[2819]: W1216 09:45:17.540004 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.540120 kubelet[2819]: E1216 09:45:17.540013 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.540302 kubelet[2819]: E1216 09:45:17.540190 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.540302 kubelet[2819]: W1216 09:45:17.540198 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.540302 kubelet[2819]: E1216 09:45:17.540206 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.540544 kubelet[2819]: E1216 09:45:17.540409 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.540544 kubelet[2819]: W1216 09:45:17.540418 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.540544 kubelet[2819]: E1216 09:45:17.540426 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.540667 kubelet[2819]: E1216 09:45:17.540613 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.540667 kubelet[2819]: W1216 09:45:17.540621 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.540667 kubelet[2819]: E1216 09:45:17.540630 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.547986 kubelet[2819]: E1216 09:45:17.547963 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.547986 kubelet[2819]: W1216 09:45:17.547980 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.548069 kubelet[2819]: E1216 09:45:17.547995 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.548316 kubelet[2819]: E1216 09:45:17.548294 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.548316 kubelet[2819]: W1216 09:45:17.548310 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.548420 kubelet[2819]: E1216 09:45:17.548327 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.548663 kubelet[2819]: E1216 09:45:17.548639 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.548663 kubelet[2819]: W1216 09:45:17.548656 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.548728 kubelet[2819]: E1216 09:45:17.548670 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.548953 kubelet[2819]: E1216 09:45:17.548918 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.548953 kubelet[2819]: W1216 09:45:17.548946 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.549313 kubelet[2819]: E1216 09:45:17.548978 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.549313 kubelet[2819]: E1216 09:45:17.549217 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.549313 kubelet[2819]: W1216 09:45:17.549239 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.549313 kubelet[2819]: E1216 09:45:17.549256 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.549523 kubelet[2819]: E1216 09:45:17.549510 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.549523 kubelet[2819]: W1216 09:45:17.549520 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.549689 kubelet[2819]: E1216 09:45:17.549608 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.549751 kubelet[2819]: E1216 09:45:17.549727 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.549751 kubelet[2819]: W1216 09:45:17.549747 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.549919 kubelet[2819]: E1216 09:45:17.549846 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.549983 kubelet[2819]: E1216 09:45:17.549972 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.550011 kubelet[2819]: W1216 09:45:17.549983 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.550164 kubelet[2819]: E1216 09:45:17.550091 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.550278 kubelet[2819]: E1216 09:45:17.550256 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.550278 kubelet[2819]: W1216 09:45:17.550272 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.550338 kubelet[2819]: E1216 09:45:17.550299 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.550913 kubelet[2819]: E1216 09:45:17.550891 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.550913 kubelet[2819]: W1216 09:45:17.550906 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.551019 kubelet[2819]: E1216 09:45:17.550947 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.551339 kubelet[2819]: E1216 09:45:17.551206 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.551339 kubelet[2819]: W1216 09:45:17.551220 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.551339 kubelet[2819]: E1216 09:45:17.551237 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.551705 kubelet[2819]: E1216 09:45:17.551682 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.551705 kubelet[2819]: W1216 09:45:17.551696 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.551965 kubelet[2819]: E1216 09:45:17.551789 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.551965 kubelet[2819]: E1216 09:45:17.551944 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.551965 kubelet[2819]: W1216 09:45:17.551954 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.551965 kubelet[2819]: E1216 09:45:17.551972 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.552256 kubelet[2819]: E1216 09:45:17.552144 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.552256 kubelet[2819]: W1216 09:45:17.552152 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.552256 kubelet[2819]: E1216 09:45:17.552168 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.552539 kubelet[2819]: E1216 09:45:17.552385 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.552539 kubelet[2819]: W1216 09:45:17.552394 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.552539 kubelet[2819]: E1216 09:45:17.552411 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.552871 kubelet[2819]: E1216 09:45:17.552852 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.552871 kubelet[2819]: W1216 09:45:17.552865 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.552980 kubelet[2819]: E1216 09:45:17.552880 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.553245 kubelet[2819]: E1216 09:45:17.553226 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.553245 kubelet[2819]: W1216 09:45:17.553239 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.553395 kubelet[2819]: E1216 09:45:17.553253 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:17.553508 kubelet[2819]: E1216 09:45:17.553480 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:17.553508 kubelet[2819]: W1216 09:45:17.553493 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:17.553508 kubelet[2819]: E1216 09:45:17.553503 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
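The earlier "Error syncing pod ... cni plugin not initialized" entries for csi-node-driver-l5q54 come from the runtime's NetworkReady condition, which containerd keeps false until a CNI config exists; the calico-node pod whose sandbox was created above is what eventually installs one. A sketch of reading that condition over the same CRI socket, with the socket path again an assumption:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	st, err := runtimeapi.NewRuntimeServiceClient(conn).
		Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	// While the CNI config is missing, expect RuntimeReady=true but
	// NetworkReady=false with a NetworkPluginNotReady reason, matching
	// the pod_workers errors in this log.
	for _, c := range st.Status.Conditions {
		fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
	}
}
```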
Dec 16 09:45:18.488458 kubelet[2819]: I1216 09:45:18.487846 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 09:45:18.549240 kubelet[2819]: E1216 09:45:18.549195 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.549240 kubelet[2819]: W1216 09:45:18.549231 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.549438 kubelet[2819]: E1216 09:45:18.549262 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.549670 kubelet[2819]: E1216 09:45:18.549648 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.549670 kubelet[2819]: W1216 09:45:18.549666 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.549771 kubelet[2819]: E1216 09:45:18.549680 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.550299 kubelet[2819]: E1216 09:45:18.550276 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.550299 kubelet[2819]: W1216 09:45:18.550291 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.550547 kubelet[2819]: E1216 09:45:18.550306 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.550890 kubelet[2819]: E1216 09:45:18.550720 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.550890 kubelet[2819]: W1216 09:45:18.550734 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.550890 kubelet[2819]: E1216 09:45:18.550785 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.551315 kubelet[2819]: E1216 09:45:18.551173 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.551315 kubelet[2819]: W1216 09:45:18.551185 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.551315 kubelet[2819]: E1216 09:45:18.551196 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.551959 kubelet[2819]: E1216 09:45:18.551625 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.551959 kubelet[2819]: W1216 09:45:18.551639 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.551959 kubelet[2819]: E1216 09:45:18.551650 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.552110 kubelet[2819]: E1216 09:45:18.552097 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.552335 kubelet[2819]: W1216 09:45:18.552207 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.552335 kubelet[2819]: E1216 09:45:18.552222 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.552743 kubelet[2819]: E1216 09:45:18.552631 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.552743 kubelet[2819]: W1216 09:45:18.552643 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.552743 kubelet[2819]: E1216 09:45:18.552654 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.553152 kubelet[2819]: E1216 09:45:18.553038 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.553152 kubelet[2819]: W1216 09:45:18.553050 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.553152 kubelet[2819]: E1216 09:45:18.553063 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.553853 kubelet[2819]: E1216 09:45:18.553564 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.553853 kubelet[2819]: W1216 09:45:18.553577 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.553853 kubelet[2819]: E1216 09:45:18.553587 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.553853 kubelet[2819]: E1216 09:45:18.553785 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.553853 kubelet[2819]: W1216 09:45:18.553795 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.553853 kubelet[2819]: E1216 09:45:18.553805 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.554727 kubelet[2819]: E1216 09:45:18.554325 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.554727 kubelet[2819]: W1216 09:45:18.554337 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.554727 kubelet[2819]: E1216 09:45:18.554362 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.555275 kubelet[2819]: E1216 09:45:18.555145 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.555275 kubelet[2819]: W1216 09:45:18.555189 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.555275 kubelet[2819]: E1216 09:45:18.555200 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.556049 kubelet[2819]: E1216 09:45:18.555943 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.556049 kubelet[2819]: W1216 09:45:18.555957 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.556049 kubelet[2819]: E1216 09:45:18.555968 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.556845 kubelet[2819]: E1216 09:45:18.556652 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.556845 kubelet[2819]: W1216 09:45:18.556683 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.556845 kubelet[2819]: E1216 09:45:18.556700 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.557733 kubelet[2819]: E1216 09:45:18.557713 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.557733 kubelet[2819]: W1216 09:45:18.557733 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.557810 kubelet[2819]: E1216 09:45:18.557747 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.558760 kubelet[2819]: E1216 09:45:18.558724 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.558760 kubelet[2819]: W1216 09:45:18.558740 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.559194 kubelet[2819]: E1216 09:45:18.558999 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.560130 kubelet[2819]: E1216 09:45:18.560083 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.560130 kubelet[2819]: W1216 09:45:18.560119 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.560192 kubelet[2819]: E1216 09:45:18.560149 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.560986 kubelet[2819]: E1216 09:45:18.560847 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.560986 kubelet[2819]: W1216 09:45:18.560878 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.561476 kubelet[2819]: E1216 09:45:18.561449 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.562006 kubelet[2819]: E1216 09:45:18.561909 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.562006 kubelet[2819]: W1216 09:45:18.561926 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.562557 kubelet[2819]: E1216 09:45:18.562528 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.563326 kubelet[2819]: E1216 09:45:18.563228 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.563326 kubelet[2819]: W1216 09:45:18.563246 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.563326 kubelet[2819]: E1216 09:45:18.563264 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.565397 kubelet[2819]: E1216 09:45:18.564732 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.565397 kubelet[2819]: W1216 09:45:18.564749 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.565397 kubelet[2819]: E1216 09:45:18.564765 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.566253 kubelet[2819]: E1216 09:45:18.566217 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.566253 kubelet[2819]: W1216 09:45:18.566236 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.566331 kubelet[2819]: E1216 09:45:18.566254 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.566945 kubelet[2819]: E1216 09:45:18.566911 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.567386 kubelet[2819]: W1216 09:45:18.567332 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.567504 kubelet[2819]: E1216 09:45:18.567482 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:45:18.568562 kubelet[2819]: E1216 09:45:18.568534 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:45:18.568562 kubelet[2819]: W1216 09:45:18.568554 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:45:18.569011 kubelet[2819]: E1216 09:45:18.568984 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 09:45:18.569622 kubelet[2819]: E1216 09:45:18.569589 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.569622 kubelet[2819]: W1216 09:45:18.569621 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.569862 kubelet[2819]: E1216 09:45:18.569769 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.570640 kubelet[2819]: E1216 09:45:18.570603 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.570640 kubelet[2819]: W1216 09:45:18.570637 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.570715 kubelet[2819]: E1216 09:45:18.570657 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.571892 kubelet[2819]: E1216 09:45:18.571779 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.571892 kubelet[2819]: W1216 09:45:18.571796 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.572355 kubelet[2819]: E1216 09:45:18.572240 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.573680 kubelet[2819]: E1216 09:45:18.573644 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.573680 kubelet[2819]: W1216 09:45:18.573663 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.574075 kubelet[2819]: E1216 09:45:18.573966 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.574316 kubelet[2819]: E1216 09:45:18.574301 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.574316 kubelet[2819]: W1216 09:45:18.574314 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.574571 kubelet[2819]: E1216 09:45:18.574327 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:45:18.574571 kubelet[2819]: E1216 09:45:18.574521 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.574571 kubelet[2819]: W1216 09:45:18.574529 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.574571 kubelet[2819]: E1216 09:45:18.574536 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.575124 kubelet[2819]: E1216 09:45:18.575038 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.575124 kubelet[2819]: W1216 09:45:18.575049 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.575124 kubelet[2819]: E1216 09:45:18.575058 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:45:18.575681 kubelet[2819]: E1216 09:45:18.575627 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:45:18.575681 kubelet[2819]: W1216 09:45:18.575637 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:45:18.575681 kubelet[2819]: E1216 09:45:18.575648 2819 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:45:18.617624 containerd[1492]: time="2024-12-16T09:45:18.617561054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:18.618560 containerd[1492]: time="2024-12-16T09:45:18.618465383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 16 09:45:18.619387 containerd[1492]: time="2024-12-16T09:45:18.619299169Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:18.621050 containerd[1492]: time="2024-12-16T09:45:18.621007106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:18.621664 containerd[1492]: time="2024-12-16T09:45:18.621545322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.788083444s" Dec 16 09:45:18.621664 containerd[1492]: time="2024-12-16T09:45:18.621572232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 16 09:45:18.624152 containerd[1492]: time="2024-12-16T09:45:18.623534715Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 09:45:18.636371 containerd[1492]: time="2024-12-16T09:45:18.636313671Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd\"" Dec 16 09:45:18.637011 containerd[1492]: time="2024-12-16T09:45:18.636762278Z" level=info msg="StartContainer for \"0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd\"" Dec 16 09:45:18.672481 systemd[1]: Started cri-containerd-0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd.scope - libcontainer container 0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd. Dec 16 09:45:18.703285 containerd[1492]: time="2024-12-16T09:45:18.703248293Z" level=info msg="StartContainer for \"0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd\" returns successfully" Dec 16 09:45:18.718773 systemd[1]: cri-containerd-0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd.scope: Deactivated successfully. Dec 16 09:45:18.739748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd-rootfs.mount: Deactivated successfully. 
Dec 16 09:45:18.791717 containerd[1492]: time="2024-12-16T09:45:18.765601975Z" level=info msg="shim disconnected" id=0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd namespace=k8s.io Dec 16 09:45:18.791717 containerd[1492]: time="2024-12-16T09:45:18.791709312Z" level=warning msg="cleaning up after shim disconnected" id=0df027c478f341b2926ce9b54c34cd8fca578f9052b39cecfd941d5e9d7bfedd namespace=k8s.io Dec 16 09:45:18.791717 containerd[1492]: time="2024-12-16T09:45:18.791721986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:19.385799 kubelet[2819]: E1216 09:45:19.384570 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:19.492579 containerd[1492]: time="2024-12-16T09:45:19.492530707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 16 09:45:21.387016 kubelet[2819]: E1216 09:45:21.386923 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:23.386085 kubelet[2819]: E1216 09:45:23.386003 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:23.476993 kubelet[2819]: I1216 09:45:23.476733 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 09:45:24.544493 containerd[1492]: time="2024-12-16T09:45:24.544436272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:24.545687 containerd[1492]: time="2024-12-16T09:45:24.545640502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 16 09:45:24.546523 containerd[1492]: time="2024-12-16T09:45:24.546456566Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:24.549019 containerd[1492]: time="2024-12-16T09:45:24.548325587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:24.549019 containerd[1492]: time="2024-12-16T09:45:24.548907844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.056335862s" Dec 16 09:45:24.549019 containerd[1492]: time="2024-12-16T09:45:24.548931488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 16 09:45:24.551104 containerd[1492]: time="2024-12-16T09:45:24.551055666Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 09:45:24.603583 containerd[1492]: time="2024-12-16T09:45:24.603535100Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72\"" Dec 16 09:45:24.604290 containerd[1492]: time="2024-12-16T09:45:24.604239355Z" level=info msg="StartContainer for \"ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72\"" Dec 16 09:45:24.651488 systemd[1]: run-containerd-runc-k8s.io-ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72-runc.8vKsSC.mount: Deactivated successfully. Dec 16 09:45:24.659818 systemd[1]: Started cri-containerd-ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72.scope - libcontainer container ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72. Dec 16 09:45:24.696933 containerd[1492]: time="2024-12-16T09:45:24.696862565Z" level=info msg="StartContainer for \"ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72\" returns successfully" Dec 16 09:45:25.163589 systemd[1]: cri-containerd-ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72.scope: Deactivated successfully. Dec 16 09:45:25.200331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72-rootfs.mount: Deactivated successfully. Dec 16 09:45:25.204338 kubelet[2819]: I1216 09:45:25.203548 2819 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 16 09:45:25.216789 containerd[1492]: time="2024-12-16T09:45:25.216703276Z" level=info msg="shim disconnected" id=ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72 namespace=k8s.io Dec 16 09:45:25.216789 containerd[1492]: time="2024-12-16T09:45:25.216771874Z" level=warning msg="cleaning up after shim disconnected" id=ed052910e635cc35e7d900225e9df976f8cd9c38159d7fd1d51ca50f1c272a72 namespace=k8s.io Dec 16 09:45:25.216789 containerd[1492]: time="2024-12-16T09:45:25.216782183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:25.236238 kubelet[2819]: I1216 09:45:25.235820 2819 topology_manager.go:215] "Topology Admit Handler" podUID="7a796b60-38ab-4508-af1f-addf8c793894" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gcrd7" Dec 16 09:45:25.240035 kubelet[2819]: I1216 09:45:25.239585 2819 topology_manager.go:215] "Topology Admit Handler" podUID="001a7c98-2357-4161-a94b-51529e6ad491" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rcgjq" Dec 16 09:45:25.242761 kubelet[2819]: I1216 09:45:25.242743 2819 topology_manager.go:215] "Topology Admit Handler" podUID="165fee54-0a8a-430c-9a31-1d0a39e3bdfc" podNamespace="calico-system" podName="calico-kube-controllers-c8fd9447d-sjkp7" Dec 16 09:45:25.248454 systemd[1]: Created slice kubepods-burstable-pod7a796b60_38ab_4508_af1f_addf8c793894.slice - libcontainer container kubepods-burstable-pod7a796b60_38ab_4508_af1f_addf8c793894.slice. 
Dec 16 09:45:25.250454 kubelet[2819]: I1216 09:45:25.249968 2819 topology_manager.go:215] "Topology Admit Handler" podUID="2776468d-7227-4930-9fc3-99816559fe3c" podNamespace="calico-apiserver" podName="calico-apiserver-5444bbcc64-vwb7k" Dec 16 09:45:25.254740 kubelet[2819]: I1216 09:45:25.254630 2819 topology_manager.go:215] "Topology Admit Handler" podUID="95596b43-8688-4f06-b103-26a3c0d961e8" podNamespace="calico-apiserver" podName="calico-apiserver-5444bbcc64-nclhs" Dec 16 09:45:25.264709 systemd[1]: Created slice kubepods-burstable-pod001a7c98_2357_4161_a94b_51529e6ad491.slice - libcontainer container kubepods-burstable-pod001a7c98_2357_4161_a94b_51529e6ad491.slice. Dec 16 09:45:25.274446 systemd[1]: Created slice kubepods-besteffort-pod165fee54_0a8a_430c_9a31_1d0a39e3bdfc.slice - libcontainer container kubepods-besteffort-pod165fee54_0a8a_430c_9a31_1d0a39e3bdfc.slice. Dec 16 09:45:25.282949 systemd[1]: Created slice kubepods-besteffort-pod2776468d_7227_4930_9fc3_99816559fe3c.slice - libcontainer container kubepods-besteffort-pod2776468d_7227_4930_9fc3_99816559fe3c.slice. Dec 16 09:45:25.290398 systemd[1]: Created slice kubepods-besteffort-pod95596b43_8688_4f06_b103_26a3c0d961e8.slice - libcontainer container kubepods-besteffort-pod95596b43_8688_4f06_b103_26a3c0d961e8.slice. Dec 16 09:45:25.313455 kubelet[2819]: I1216 09:45:25.313410 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/001a7c98-2357-4161-a94b-51529e6ad491-config-volume\") pod \"coredns-7db6d8ff4d-rcgjq\" (UID: \"001a7c98-2357-4161-a94b-51529e6ad491\") " pod="kube-system/coredns-7db6d8ff4d-rcgjq" Dec 16 09:45:25.313608 kubelet[2819]: I1216 09:45:25.313591 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvnj\" (UniqueName: \"kubernetes.io/projected/165fee54-0a8a-430c-9a31-1d0a39e3bdfc-kube-api-access-tfvnj\") pod \"calico-kube-controllers-c8fd9447d-sjkp7\" (UID: \"165fee54-0a8a-430c-9a31-1d0a39e3bdfc\") " pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" Dec 16 09:45:25.313725 kubelet[2819]: I1216 09:45:25.313711 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/165fee54-0a8a-430c-9a31-1d0a39e3bdfc-tigera-ca-bundle\") pod \"calico-kube-controllers-c8fd9447d-sjkp7\" (UID: \"165fee54-0a8a-430c-9a31-1d0a39e3bdfc\") " pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" Dec 16 09:45:25.313842 kubelet[2819]: I1216 09:45:25.313829 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclm8\" (UniqueName: \"kubernetes.io/projected/001a7c98-2357-4161-a94b-51529e6ad491-kube-api-access-fclm8\") pod \"coredns-7db6d8ff4d-rcgjq\" (UID: \"001a7c98-2357-4161-a94b-51529e6ad491\") " pod="kube-system/coredns-7db6d8ff4d-rcgjq" Dec 16 09:45:25.314016 kubelet[2819]: I1216 09:45:25.314001 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvbl\" (UniqueName: \"kubernetes.io/projected/95596b43-8688-4f06-b103-26a3c0d961e8-kube-api-access-dsvbl\") pod \"calico-apiserver-5444bbcc64-nclhs\" (UID: \"95596b43-8688-4f06-b103-26a3c0d961e8\") " pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" Dec 16 09:45:25.314124 kubelet[2819]: I1216 09:45:25.314110 2819 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a796b60-38ab-4508-af1f-addf8c793894-config-volume\") pod \"coredns-7db6d8ff4d-gcrd7\" (UID: \"7a796b60-38ab-4508-af1f-addf8c793894\") " pod="kube-system/coredns-7db6d8ff4d-gcrd7" Dec 16 09:45:25.314220 kubelet[2819]: I1216 09:45:25.314207 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knv8s\" (UniqueName: \"kubernetes.io/projected/7a796b60-38ab-4508-af1f-addf8c793894-kube-api-access-knv8s\") pod \"coredns-7db6d8ff4d-gcrd7\" (UID: \"7a796b60-38ab-4508-af1f-addf8c793894\") " pod="kube-system/coredns-7db6d8ff4d-gcrd7" Dec 16 09:45:25.314392 kubelet[2819]: I1216 09:45:25.314287 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95596b43-8688-4f06-b103-26a3c0d961e8-calico-apiserver-certs\") pod \"calico-apiserver-5444bbcc64-nclhs\" (UID: \"95596b43-8688-4f06-b103-26a3c0d961e8\") " pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" Dec 16 09:45:25.314392 kubelet[2819]: I1216 09:45:25.314309 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2776468d-7227-4930-9fc3-99816559fe3c-calico-apiserver-certs\") pod \"calico-apiserver-5444bbcc64-vwb7k\" (UID: \"2776468d-7227-4930-9fc3-99816559fe3c\") " pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" Dec 16 09:45:25.314392 kubelet[2819]: I1216 09:45:25.314324 2819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtcc\" (UniqueName: \"kubernetes.io/projected/2776468d-7227-4930-9fc3-99816559fe3c-kube-api-access-mjtcc\") pod \"calico-apiserver-5444bbcc64-vwb7k\" (UID: \"2776468d-7227-4930-9fc3-99816559fe3c\") " pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" Dec 16 09:45:25.391262 systemd[1]: Created slice kubepods-besteffort-pod4a2c894e_6c25_40a0_9a45_3b04bb3fe827.slice - libcontainer container kubepods-besteffort-pod4a2c894e_6c25_40a0_9a45_3b04bb3fe827.slice. 
Dec 16 09:45:25.395610 containerd[1492]: time="2024-12-16T09:45:25.395572889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5q54,Uid:4a2c894e-6c25-40a0-9a45-3b04bb3fe827,Namespace:calico-system,Attempt:0,}" Dec 16 09:45:25.506226 containerd[1492]: time="2024-12-16T09:45:25.506138224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 16 09:45:25.560247 containerd[1492]: time="2024-12-16T09:45:25.560186686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcrd7,Uid:7a796b60-38ab-4508-af1f-addf8c793894,Namespace:kube-system,Attempt:0,}" Dec 16 09:45:25.570362 containerd[1492]: time="2024-12-16T09:45:25.569556837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcgjq,Uid:001a7c98-2357-4161-a94b-51529e6ad491,Namespace:kube-system,Attempt:0,}" Dec 16 09:45:25.570362 containerd[1492]: time="2024-12-16T09:45:25.570216710Z" level=error msg="Failed to destroy network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.574570 containerd[1492]: time="2024-12-16T09:45:25.574464343Z" level=error msg="encountered an error cleaning up failed sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.574570 containerd[1492]: time="2024-12-16T09:45:25.574509087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5q54,Uid:4a2c894e-6c25-40a0-9a45-3b04bb3fe827,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.575136 kubelet[2819]: E1216 09:45:25.574953 2819 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.575136 kubelet[2819]: E1216 09:45:25.575047 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:25.575408 kubelet[2819]: E1216 09:45:25.575307 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5q54" Dec 16 09:45:25.576365 kubelet[2819]: E1216 09:45:25.576194 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5q54_calico-system(4a2c894e-6c25-40a0-9a45-3b04bb3fe827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5q54_calico-system(4a2c894e-6c25-40a0-9a45-3b04bb3fe827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:25.580787 containerd[1492]: time="2024-12-16T09:45:25.580692897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8fd9447d-sjkp7,Uid:165fee54-0a8a-430c-9a31-1d0a39e3bdfc,Namespace:calico-system,Attempt:0,}" Dec 16 09:45:25.593514 containerd[1492]: time="2024-12-16T09:45:25.591777992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-vwb7k,Uid:2776468d-7227-4930-9fc3-99816559fe3c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 09:45:25.611612 containerd[1492]: time="2024-12-16T09:45:25.611392208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-nclhs,Uid:95596b43-8688-4f06-b103-26a3c0d961e8,Namespace:calico-apiserver,Attempt:0,}" Dec 16 09:45:25.784763 containerd[1492]: time="2024-12-16T09:45:25.784598564Z" level=error msg="Failed to destroy network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.785466 containerd[1492]: time="2024-12-16T09:45:25.785432270Z" level=error msg="Failed to destroy network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.785903 containerd[1492]: time="2024-12-16T09:45:25.785862434Z" level=error msg="encountered an error cleaning up failed sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.785981 containerd[1492]: time="2024-12-16T09:45:25.785940139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-vwb7k,Uid:2776468d-7227-4930-9fc3-99816559fe3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.787389 kubelet[2819]: E1216 09:45:25.786223 2819 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.787389 kubelet[2819]: E1216 09:45:25.786280 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" Dec 16 09:45:25.787389 kubelet[2819]: E1216 09:45:25.786300 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" Dec 16 09:45:25.787506 containerd[1492]: time="2024-12-16T09:45:25.786663520Z" level=error msg="encountered an error cleaning up failed sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.787506 containerd[1492]: time="2024-12-16T09:45:25.786923766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcrd7,Uid:7a796b60-38ab-4508-af1f-addf8c793894,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.788530 kubelet[2819]: E1216 09:45:25.787206 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5444bbcc64-vwb7k_calico-apiserver(2776468d-7227-4930-9fc3-99816559fe3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5444bbcc64-vwb7k_calico-apiserver(2776468d-7227-4930-9fc3-99816559fe3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" podUID="2776468d-7227-4930-9fc3-99816559fe3c" Dec 16 09:45:25.788530 kubelet[2819]: E1216 09:45:25.787293 2819 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.788530 kubelet[2819]: E1216 09:45:25.787320 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gcrd7" Dec 16 09:45:25.788708 kubelet[2819]: E1216 09:45:25.787334 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gcrd7" Dec 16 09:45:25.788708 kubelet[2819]: E1216 09:45:25.788427 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gcrd7_kube-system(7a796b60-38ab-4508-af1f-addf8c793894)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gcrd7_kube-system(7a796b60-38ab-4508-af1f-addf8c793894)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gcrd7" podUID="7a796b60-38ab-4508-af1f-addf8c793894" Dec 16 09:45:25.801203 containerd[1492]: time="2024-12-16T09:45:25.801154476Z" level=error msg="Failed to destroy network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.801787 containerd[1492]: time="2024-12-16T09:45:25.801755450Z" level=error msg="encountered an error cleaning up failed sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.801856 containerd[1492]: time="2024-12-16T09:45:25.801805282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcgjq,Uid:001a7c98-2357-4161-a94b-51529e6ad491,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.802069 kubelet[2819]: E1216 09:45:25.802029 2819 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.802141 kubelet[2819]: E1216 09:45:25.802084 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rcgjq" Dec 16 09:45:25.802141 kubelet[2819]: E1216 09:45:25.802102 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rcgjq" Dec 16 09:45:25.802222 kubelet[2819]: E1216 09:45:25.802145 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rcgjq_kube-system(001a7c98-2357-4161-a94b-51529e6ad491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rcgjq_kube-system(001a7c98-2357-4161-a94b-51529e6ad491)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rcgjq" podUID="001a7c98-2357-4161-a94b-51529e6ad491" Dec 16 09:45:25.835669 containerd[1492]: time="2024-12-16T09:45:25.835365015Z" level=error msg="Failed to destroy network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.836148 containerd[1492]: time="2024-12-16T09:45:25.836030679Z" level=error msg="encountered an error cleaning up failed sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.836148 containerd[1492]: time="2024-12-16T09:45:25.836103604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8fd9447d-sjkp7,Uid:165fee54-0a8a-430c-9a31-1d0a39e3bdfc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.836941 containerd[1492]: time="2024-12-16T09:45:25.836385200Z" level=error msg="Failed to destroy network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.837007 kubelet[2819]: E1216 09:45:25.836426 2819 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.837007 kubelet[2819]: E1216 09:45:25.836476 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" Dec 16 09:45:25.837007 kubelet[2819]: E1216 09:45:25.836507 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" Dec 16 09:45:25.837134 kubelet[2819]: E1216 09:45:25.836543 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c8fd9447d-sjkp7_calico-system(165fee54-0a8a-430c-9a31-1d0a39e3bdfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8fd9447d-sjkp7_calico-system(165fee54-0a8a-430c-9a31-1d0a39e3bdfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" podUID="165fee54-0a8a-430c-9a31-1d0a39e3bdfc" Dec 16 09:45:25.837730 containerd[1492]: time="2024-12-16T09:45:25.837379487Z" level=error msg="encountered an error cleaning up failed sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.837730 containerd[1492]: time="2024-12-16T09:45:25.837452915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-nclhs,Uid:95596b43-8688-4f06-b103-26a3c0d961e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.837900 kubelet[2819]: E1216 
09:45:25.837692 2819 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:25.837978 kubelet[2819]: E1216 09:45:25.837907 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" Dec 16 09:45:25.837978 kubelet[2819]: E1216 09:45:25.837933 2819 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" Dec 16 09:45:25.838069 kubelet[2819]: E1216 09:45:25.837996 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5444bbcc64-nclhs_calico-apiserver(95596b43-8688-4f06-b103-26a3c0d961e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5444bbcc64-nclhs_calico-apiserver(95596b43-8688-4f06-b103-26a3c0d961e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" podUID="95596b43-8688-4f06-b103-26a3c0d961e8" Dec 16 09:45:26.507787 kubelet[2819]: I1216 09:45:26.507750 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:26.510750 kubelet[2819]: I1216 09:45:26.510294 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:26.511496 containerd[1492]: time="2024-12-16T09:45:26.511422534Z" level=info msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\"" Dec 16 09:45:26.513947 containerd[1492]: time="2024-12-16T09:45:26.513892808Z" level=info msg="Ensure that sandbox a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e in task-service has been cleanup successfully" Dec 16 09:45:26.514664 containerd[1492]: time="2024-12-16T09:45:26.514623523Z" level=info msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" Dec 16 09:45:26.514808 containerd[1492]: time="2024-12-16T09:45:26.514776439Z" level=info msg="Ensure that sandbox 56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d in task-service has been cleanup successfully" Dec 16 09:45:26.518733 kubelet[2819]: I1216 
09:45:26.518466 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:26.520025 containerd[1492]: time="2024-12-16T09:45:26.519993874Z" level=info msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\"" Dec 16 09:45:26.520336 containerd[1492]: time="2024-12-16T09:45:26.520155266Z" level=info msg="Ensure that sandbox 845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696 in task-service has been cleanup successfully" Dec 16 09:45:26.522024 kubelet[2819]: I1216 09:45:26.521431 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:26.523188 containerd[1492]: time="2024-12-16T09:45:26.523151072Z" level=info msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" Dec 16 09:45:26.525431 kubelet[2819]: I1216 09:45:26.525118 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:26.526270 containerd[1492]: time="2024-12-16T09:45:26.526248908Z" level=info msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" Dec 16 09:45:26.526822 containerd[1492]: time="2024-12-16T09:45:26.526710220Z" level=info msg="Ensure that sandbox c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9 in task-service has been cleanup successfully" Dec 16 09:45:26.527566 containerd[1492]: time="2024-12-16T09:45:26.527543286Z" level=info msg="Ensure that sandbox 1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0 in task-service has been cleanup successfully" Dec 16 09:45:26.531609 kubelet[2819]: I1216 09:45:26.531550 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:26.533191 containerd[1492]: time="2024-12-16T09:45:26.532943924Z" level=info msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" Dec 16 09:45:26.534115 containerd[1492]: time="2024-12-16T09:45:26.534091348Z" level=info msg="Ensure that sandbox 119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88 in task-service has been cleanup successfully" Dec 16 09:45:26.592591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88-shm.mount: Deactivated successfully. Dec 16 09:45:26.592776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0-shm.mount: Deactivated successfully. Dec 16 09:45:26.592844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e-shm.mount: Deactivated successfully. Dec 16 09:45:26.592910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696-shm.mount: Deactivated successfully. 
Dec 16 09:45:26.609842 containerd[1492]: time="2024-12-16T09:45:26.609159099Z" level=error msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" failed" error="failed to destroy network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.610455 containerd[1492]: time="2024-12-16T09:45:26.609218159Z" level=error msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" failed" error="failed to destroy network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.610820 kubelet[2819]: E1216 09:45:26.610681 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:26.610820 kubelet[2819]: E1216 09:45:26.610733 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696"} Dec 16 09:45:26.611192 kubelet[2819]: E1216 09:45:26.610944 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:26.611192 kubelet[2819]: E1216 09:45:26.611095 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e"} Dec 16 09:45:26.611192 kubelet[2819]: E1216 09:45:26.611122 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"001a7c98-2357-4161-a94b-51529e6ad491\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.611192 kubelet[2819]: E1216 09:45:26.611154 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"001a7c98-2357-4161-a94b-51529e6ad491\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rcgjq" podUID="001a7c98-2357-4161-a94b-51529e6ad491" Dec 16 09:45:26.611729 containerd[1492]: time="2024-12-16T09:45:26.611048548Z" level=error msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" failed" error="failed to destroy network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.611823 kubelet[2819]: E1216 09:45:26.611497 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a796b60-38ab-4508-af1f-addf8c793894\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.611823 kubelet[2819]: E1216 09:45:26.611529 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a796b60-38ab-4508-af1f-addf8c793894\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gcrd7" podUID="7a796b60-38ab-4508-af1f-addf8c793894" Dec 16 09:45:26.611823 kubelet[2819]: E1216 09:45:26.611625 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:26.611823 kubelet[2819]: E1216 09:45:26.611644 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d"} Dec 16 09:45:26.612237 kubelet[2819]: E1216 09:45:26.611663 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.612237 kubelet[2819]: E1216 09:45:26.611918 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a2c894e-6c25-40a0-9a45-3b04bb3fe827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5q54" podUID="4a2c894e-6c25-40a0-9a45-3b04bb3fe827" Dec 16 09:45:26.612643 containerd[1492]: time="2024-12-16T09:45:26.612605687Z" level=error msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" failed" error="failed to destroy network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.613400 kubelet[2819]: E1216 09:45:26.612915 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:26.613400 kubelet[2819]: E1216 09:45:26.612939 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0"} Dec 16 09:45:26.613400 kubelet[2819]: E1216 09:45:26.612974 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"165fee54-0a8a-430c-9a31-1d0a39e3bdfc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.613400 kubelet[2819]: E1216 09:45:26.612989 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"165fee54-0a8a-430c-9a31-1d0a39e3bdfc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" podUID="165fee54-0a8a-430c-9a31-1d0a39e3bdfc" Dec 16 09:45:26.618581 containerd[1492]: time="2024-12-16T09:45:26.617610656Z" level=error msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" failed" error="failed to destroy network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.618636 kubelet[2819]: E1216 09:45:26.617745 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:26.618636 kubelet[2819]: E1216 09:45:26.617768 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88"} Dec 16 09:45:26.618636 kubelet[2819]: E1216 09:45:26.617787 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2776468d-7227-4930-9fc3-99816559fe3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.618636 kubelet[2819]: E1216 09:45:26.617802 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2776468d-7227-4930-9fc3-99816559fe3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" podUID="2776468d-7227-4930-9fc3-99816559fe3c" Dec 16 09:45:26.620649 containerd[1492]: time="2024-12-16T09:45:26.620619206Z" level=error msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" failed" error="failed to destroy network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:45:26.620809 kubelet[2819]: E1216 09:45:26.620783 2819 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:26.620809 kubelet[2819]: E1216 09:45:26.620808 2819 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9"} Dec 16 09:45:26.620886 kubelet[2819]: E1216 09:45:26.620826 2819 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95596b43-8688-4f06-b103-26a3c0d961e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:45:26.620886 kubelet[2819]: E1216 09:45:26.620841 2819 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95596b43-8688-4f06-b103-26a3c0d961e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" podUID="95596b43-8688-4f06-b103-26a3c0d961e8" Dec 16 09:45:33.238810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701858975.mount: Deactivated successfully. Dec 16 09:45:33.317669 containerd[1492]: time="2024-12-16T09:45:33.300036340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 16 09:45:33.318915 containerd[1492]: time="2024-12-16T09:45:33.318872851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:33.331007 containerd[1492]: time="2024-12-16T09:45:33.330950741Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:33.331674 containerd[1492]: time="2024-12-16T09:45:33.331548839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.825352137s" Dec 16 09:45:33.331674 containerd[1492]: time="2024-12-16T09:45:33.331581510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 16 09:45:33.332284 containerd[1492]: time="2024-12-16T09:45:33.332259196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:33.457069 containerd[1492]: time="2024-12-16T09:45:33.457012566Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 09:45:33.517237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992402547.mount: Deactivated successfully. Dec 16 09:45:33.542835 containerd[1492]: time="2024-12-16T09:45:33.542788727Z" level=info msg="CreateContainer within sandbox \"89213f0e53bed3cf618b3e2a009011813d5c28e628ba134638e9c8c478daf851\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f\"" Dec 16 09:45:33.547706 containerd[1492]: time="2024-12-16T09:45:33.547456019Z" level=info msg="StartContainer for \"dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f\"" Dec 16 09:45:33.628483 systemd[1]: Started cri-containerd-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f.scope - libcontainer container dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f. 
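A quick sanity check on the pull statistics above: 142742010 bytes read in 7.825352137 s works out to 142742010 / 7.825352137 ≈ 18.2 MB/s (about 17.4 MiB/s), and the size "142741872" recorded in the Pulled message differs from the bytes read by only 138 bytes, consistent with the two counters measuring essentially the same transfer.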
Dec 16 09:45:33.680032 containerd[1492]: time="2024-12-16T09:45:33.679851108Z" level=info msg="StartContainer for \"dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f\" returns successfully" Dec 16 09:45:33.775993 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 09:45:33.778060 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 09:45:35.402379 kernel: bpftool[4096]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 16 09:45:35.644422 systemd-networkd[1396]: vxlan.calico: Link UP Dec 16 09:45:35.644431 systemd-networkd[1396]: vxlan.calico: Gained carrier Dec 16 09:45:36.931530 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Dec 16 09:45:38.385065 containerd[1492]: time="2024-12-16T09:45:38.384999260Z" level=info msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" Dec 16 09:45:38.465429 kubelet[2819]: I1216 09:45:38.463983 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xw5bf" podStartSLOduration=6.338471196 podStartE2EDuration="25.451464166s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:14.244623642 +0000 UTC m=+20.957928116" lastFinishedPulling="2024-12-16 09:45:33.357616612 +0000 UTC m=+40.070921086" observedRunningTime="2024-12-16 09:45:34.580987538 +0000 UTC m=+41.294292041" watchObservedRunningTime="2024-12-16 09:45:38.451464166 +0000 UTC m=+45.164768649" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.451 [INFO][4229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.453 [INFO][4229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" iface="eth0" netns="/var/run/netns/cni-4df8f9d0-61fa-6bb0-3f88-f6379ba4a252" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.454 [INFO][4229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" iface="eth0" netns="/var/run/netns/cni-4df8f9d0-61fa-6bb0-3f88-f6379ba4a252" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.455 [INFO][4229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" iface="eth0" netns="/var/run/netns/cni-4df8f9d0-61fa-6bb0-3f88-f6379ba4a252" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.455 [INFO][4229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.455 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.597 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.601 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.601 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.612 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.612 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.613 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:38.619123 containerd[1492]: 2024-12-16 09:45:38.615 [INFO][4229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:38.623399 systemd[1]: run-netns-cni\x2d4df8f9d0\x2d61fa\x2d6bb0\x2d3f88\x2df6379ba4a252.mount: Deactivated successfully. 
Dec 16 09:45:38.626171 containerd[1492]: time="2024-12-16T09:45:38.626128897Z" level=info msg="TearDown network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" successfully" Dec 16 09:45:38.626171 containerd[1492]: time="2024-12-16T09:45:38.626162550Z" level=info msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" returns successfully" Dec 16 09:45:38.626974 containerd[1492]: time="2024-12-16T09:45:38.626943329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8fd9447d-sjkp7,Uid:165fee54-0a8a-430c-9a31-1d0a39e3bdfc,Namespace:calico-system,Attempt:1,}" Dec 16 09:45:38.748067 systemd-networkd[1396]: califd42ab9d8d6: Link UP Dec 16 09:45:38.749623 systemd-networkd[1396]: califd42ab9d8d6: Gained carrier Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.679 [INFO][4246] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0 calico-kube-controllers-c8fd9447d- calico-system 165fee54-0a8a-430c-9a31-1d0a39e3bdfc 778 0 2024-12-16 09:45:13 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8fd9447d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 calico-kube-controllers-c8fd9447d-sjkp7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd42ab9d8d6 [] []}} ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.679 [INFO][4246] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.708 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" HandleID="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.716 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" HandleID="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-e-12e77f9037", "pod":"calico-kube-controllers-c8fd9447d-sjkp7", "timestamp":"2024-12-16 09:45:38.708828347 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.716 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.716 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.716 [INFO][4253] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037' Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.718 [INFO][4253] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.725 [INFO][4253] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.728 [INFO][4253] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.729 [INFO][4253] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.731 [INFO][4253] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.731 [INFO][4253] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.732 [INFO][4253] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.735 [INFO][4253] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.741 [INFO][4253] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.193/26] block=192.168.74.192/26 handle="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.741 [INFO][4253] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.193/26] handle="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:38.767418 containerd[1492]: 2024-12-16 09:45:38.741 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
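The IPAM trace above is ordinary CIDR bookkeeping: the host holds an affinity for the block 192.168.74.192/26, which contains 2^(32-26) = 64 addresses (192.168.74.192 through 192.168.74.255), and the first address claimed from it is 192.168.74.193; the sandboxes set up later in this log receive .194 and .195 from the same block. A quick containment check with Go's net/netip:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affine block Calico loaded for this host, per the trace above.
    	block := netip.MustParsePrefix("192.168.74.192/26")
    	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

    	// The three addresses assigned over the course of this log.
    	for _, s := range []string{"192.168.74.193", "192.168.74.194", "192.168.74.195"} {
    		addr := netip.MustParseAddr(s)
    		fmt.Println(s, "in block:", block.Contains(addr)) // true
    	}
    }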
Dec 16 09:45:38.769721 containerd[1492]: 2024-12-16 09:45:38.741 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.193/26] IPv6=[] ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" HandleID="k8s-pod-network.a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.769721 containerd[1492]: 2024-12-16 09:45:38.745 [INFO][4246] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0", GenerateName:"calico-kube-controllers-c8fd9447d-", Namespace:"calico-system", SelfLink:"", UID:"165fee54-0a8a-430c-9a31-1d0a39e3bdfc", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8fd9447d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"calico-kube-controllers-c8fd9447d-sjkp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd42ab9d8d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:38.769721 containerd[1492]: 2024-12-16 09:45:38.745 [INFO][4246] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.193/32] ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.769721 containerd[1492]: 2024-12-16 09:45:38.745 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd42ab9d8d6 ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.769721 containerd[1492]: 2024-12-16 09:45:38.749 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.771762 
containerd[1492]: 2024-12-16 09:45:38.749 [INFO][4246] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0", GenerateName:"calico-kube-controllers-c8fd9447d-", Namespace:"calico-system", SelfLink:"", UID:"165fee54-0a8a-430c-9a31-1d0a39e3bdfc", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8fd9447d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b", Pod:"calico-kube-controllers-c8fd9447d-sjkp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd42ab9d8d6", MAC:"b2:0f:5b:24:aa:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:38.771762 containerd[1492]: 2024-12-16 09:45:38.760 [INFO][4246] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b" Namespace="calico-system" Pod="calico-kube-controllers-c8fd9447d-sjkp7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:38.796122 containerd[1492]: time="2024-12-16T09:45:38.796016388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:45:38.796122 containerd[1492]: time="2024-12-16T09:45:38.796073044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:45:38.796379 containerd[1492]: time="2024-12-16T09:45:38.796097610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:38.797047 containerd[1492]: time="2024-12-16T09:45:38.796964711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:38.820463 systemd[1]: Started cri-containerd-a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b.scope - libcontainer container a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b. 
Dec 16 09:45:38.857851 containerd[1492]: time="2024-12-16T09:45:38.857768625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8fd9447d-sjkp7,Uid:165fee54-0a8a-430c-9a31-1d0a39e3bdfc,Namespace:calico-system,Attempt:1,} returns sandbox id \"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b\"" Dec 16 09:45:38.859568 containerd[1492]: time="2024-12-16T09:45:38.859444748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 16 09:45:39.387197 containerd[1492]: time="2024-12-16T09:45:39.386617292Z" level=info msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" Dec 16 09:45:39.387197 containerd[1492]: time="2024-12-16T09:45:39.386955625Z" level=info msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" Dec 16 09:45:39.388943 containerd[1492]: time="2024-12-16T09:45:39.388918946Z" level=info msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.466 [INFO][4355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.468 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" iface="eth0" netns="/var/run/netns/cni-78fe4806-1f6b-244f-c7c3-7ca428d971f5" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.472 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" iface="eth0" netns="/var/run/netns/cni-78fe4806-1f6b-244f-c7c3-7ca428d971f5" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.473 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" iface="eth0" netns="/var/run/netns/cni-78fe4806-1f6b-244f-c7c3-7ca428d971f5" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.474 [INFO][4355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.474 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.503 [INFO][4374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.503 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.503 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.510 [WARNING][4374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.510 [INFO][4374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.511 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:39.518264 containerd[1492]: 2024-12-16 09:45:39.514 [INFO][4355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:39.519257 containerd[1492]: time="2024-12-16T09:45:39.519231354Z" level=info msg="TearDown network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" successfully" Dec 16 09:45:39.519339 containerd[1492]: time="2024-12-16T09:45:39.519325140Z" level=info msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" returns successfully" Dec 16 09:45:39.520069 containerd[1492]: time="2024-12-16T09:45:39.520047740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-nclhs,Uid:95596b43-8688-4f06-b103-26a3c0d961e8,Namespace:calico-apiserver,Attempt:1,}" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.467 [INFO][4344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.468 [INFO][4344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" iface="eth0" netns="/var/run/netns/cni-ec2a3d26-948d-d902-941c-e6c750dddb39" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.468 [INFO][4344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" iface="eth0" netns="/var/run/netns/cni-ec2a3d26-948d-d902-941c-e6c750dddb39" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.469 [INFO][4344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" iface="eth0" netns="/var/run/netns/cni-ec2a3d26-948d-d902-941c-e6c750dddb39" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.469 [INFO][4344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.469 [INFO][4344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.517 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.517 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.517 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.523 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.523 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.525 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:39.535971 containerd[1492]: 2024-12-16 09:45:39.530 [INFO][4344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:39.537097 containerd[1492]: time="2024-12-16T09:45:39.536506205Z" level=info msg="TearDown network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" successfully" Dec 16 09:45:39.537097 containerd[1492]: time="2024-12-16T09:45:39.536528497Z" level=info msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" returns successfully" Dec 16 09:45:39.538985 containerd[1492]: time="2024-12-16T09:45:39.538912984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-vwb7k,Uid:2776468d-7227-4930-9fc3-99816559fe3c,Namespace:calico-apiserver,Attempt:1,}" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.467 [INFO][4359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.470 [INFO][4359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" iface="eth0" netns="/var/run/netns/cni-ffd468ec-0317-9573-6161-bb7daf597fbd" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.473 [INFO][4359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" iface="eth0" netns="/var/run/netns/cni-ffd468ec-0317-9573-6161-bb7daf597fbd" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.473 [INFO][4359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" iface="eth0" netns="/var/run/netns/cni-ffd468ec-0317-9573-6161-bb7daf597fbd" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.473 [INFO][4359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.473 [INFO][4359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.521 [INFO][4375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.521 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.526 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.536 [WARNING][4375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.537 [INFO][4375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.540 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:39.548119 containerd[1492]: 2024-12-16 09:45:39.544 [INFO][4359] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:39.549446 containerd[1492]: time="2024-12-16T09:45:39.548259356Z" level=info msg="TearDown network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" successfully" Dec 16 09:45:39.549446 containerd[1492]: time="2024-12-16T09:45:39.548283250Z" level=info msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" returns successfully" Dec 16 09:45:39.549446 containerd[1492]: time="2024-12-16T09:45:39.548822497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5q54,Uid:4a2c894e-6c25-40a0-9a45-3b04bb3fe827,Namespace:calico-system,Attempt:1,}" Dec 16 09:45:39.630468 systemd[1]: run-netns-cni\x2d78fe4806\x2d1f6b\x2d244f\x2dc7c3\x2d7ca428d971f5.mount: Deactivated successfully. Dec 16 09:45:39.630563 systemd[1]: run-netns-cni\x2dec2a3d26\x2d948d\x2dd902\x2d941c\x2de6c750dddb39.mount: Deactivated successfully. Dec 16 09:45:39.630627 systemd[1]: run-netns-cni\x2dffd468ec\x2d0317\x2d9573\x2d6161\x2dbb7daf597fbd.mount: Deactivated successfully. Dec 16 09:45:39.708798 systemd-networkd[1396]: cali482a961be6d: Link UP Dec 16 09:45:39.710882 systemd-networkd[1396]: cali482a961be6d: Gained carrier Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.584 [INFO][4390] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0 calico-apiserver-5444bbcc64- calico-apiserver 95596b43-8688-4f06-b103-26a3c0d961e8 788 0 2024-12-16 09:45:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5444bbcc64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 calico-apiserver-5444bbcc64-nclhs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali482a961be6d [] []}} ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.584 [INFO][4390] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.653 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" HandleID="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.669 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" HandleID="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00011d130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-e-12e77f9037", "pod":"calico-apiserver-5444bbcc64-nclhs", "timestamp":"2024-12-16 09:45:39.653926746 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.669 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.669 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.669 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037' Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.670 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.674 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.681 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.683 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.685 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.685 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.690 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.695 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.702 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.194/26] block=192.168.74.192/26 handle="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.702 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.194/26] handle="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" host="ci-4081-2-1-e-12e77f9037" Dec 16 09:45:39.732188 containerd[1492]: 2024-12-16 09:45:39.702 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
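Note the release ordering in the teardown traces a few entries back: the plugin first attempts "Releasing address using handleID", logs "Asked to release address but it doesn't exist. Ignoring" because these sandboxes never completed a successful ADD while calico/node was down, and then also runs "Releasing address using workloadID" so the delete still converges. A condensed Go sketch of that two-step release, with releaseByHandle and releaseByWorkload as hypothetical stand-ins for the IPAM client calls:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("handle doesn't exist")

    // Hypothetical stand-ins for the IPAM client's two release keys seen in
    // the trace: the CNI handle and the workload endpoint name.
    func releaseByHandle(handleID string) error   { return errNotFound } // nothing recorded under this handle
    func releaseByWorkload(workloadID string) error { return nil }

    // releaseIPs mirrors the DEL-path ordering in the trace: try the handle,
    // treat a missing handle as ignorable, then release by workload ID so
    // teardown succeeds even for sandboxes whose ADD never completed.
    func releaseIPs(handleID, workloadID string) error {
    	if err := releaseByHandle(handleID); err != nil && !errors.Is(err, errNotFound) {
    		return err
    	}
    	return releaseByWorkload(workloadID)
    }

    func main() {
    	err := releaseIPs("k8s-pod-network.c2dc58e5", "calico-apiserver-5444bbcc64-nclhs")
    	fmt.Println("release error:", err) // release error: <nil>
    }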
Dec 16 09:45:39.732724 containerd[1492]: 2024-12-16 09:45:39.703 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.194/26] IPv6=[] ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" HandleID="k8s-pod-network.436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.732724 containerd[1492]: 2024-12-16 09:45:39.706 [INFO][4390] cni-plugin/k8s.go 386: Populated endpoint ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"95596b43-8688-4f06-b103-26a3c0d961e8", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"calico-apiserver-5444bbcc64-nclhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali482a961be6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:39.732724 containerd[1492]: 2024-12-16 09:45:39.706 [INFO][4390] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.194/32] ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.732724 containerd[1492]: 2024-12-16 09:45:39.707 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali482a961be6d ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.732724 containerd[1492]: 2024-12-16 09:45:39.709 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.733321 containerd[1492]: 2024-12-16 09:45:39.710 [INFO][4390] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"95596b43-8688-4f06-b103-26a3c0d961e8", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e", Pod:"calico-apiserver-5444bbcc64-nclhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali482a961be6d", MAC:"f6:0a:36:68:08:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:39.733321 containerd[1492]: 2024-12-16 09:45:39.726 [INFO][4390] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-nclhs" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:39.768055 systemd-networkd[1396]: cali9e7320e5510: Link UP Dec 16 09:45:39.769111 systemd-networkd[1396]: cali9e7320e5510: Gained carrier Dec 16 09:45:39.795781 containerd[1492]: time="2024-12-16T09:45:39.794779708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:45:39.795781 containerd[1492]: time="2024-12-16T09:45:39.794822448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:45:39.795781 containerd[1492]: time="2024-12-16T09:45:39.794845351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:39.795781 containerd[1492]: time="2024-12-16T09:45:39.794927143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.640 [INFO][4413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0 csi-node-driver- calico-system 4a2c894e-6c25-40a0-9a45-3b04bb3fe827 789 0 2024-12-16 09:45:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 csi-node-driver-l5q54 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9e7320e5510 [] []}} ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-" Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.640 [INFO][4413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.691 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" HandleID="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.700 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" HandleID="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-e-12e77f9037", "pod":"csi-node-driver-l5q54", "timestamp":"2024-12-16 09:45:39.69155886 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.700 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.702 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.703 [INFO][4430] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037'
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.704 [INFO][4430] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.709 [INFO][4430] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.719 [INFO][4430] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.723 [INFO][4430] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.729 [INFO][4430] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.729 [INFO][4430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.733 [INFO][4430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.738 [INFO][4430] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.746 [INFO][4430] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.195/26] block=192.168.74.192/26 handle="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.747 [INFO][4430] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.195/26] handle="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.747 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:39.808516 containerd[1492]: 2024-12-16 09:45:39.747 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.195/26] IPv6=[] ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" HandleID="k8s-pod-network.e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0"
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.759 [INFO][4413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a2c894e-6c25-40a0-9a45-3b04bb3fe827", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"csi-node-driver-l5q54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e7320e5510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.760 [INFO][4413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.195/32] ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0"
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.761 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e7320e5510 ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0"
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.769 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0"
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.771 [INFO][4413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a2c894e-6c25-40a0-9a45-3b04bb3fe827", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c", Pod:"csi-node-driver-l5q54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e7320e5510", MAC:"de:4f:47:3d:98:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:39.809216 containerd[1492]: 2024-12-16 09:45:39.789 [INFO][4413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c" Namespace="calico-system" Pod="csi-node-driver-l5q54" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0"
Dec 16 09:45:39.822832 systemd-networkd[1396]: cali77680fb9381: Link UP
Dec 16 09:45:39.824189 systemd-networkd[1396]: cali77680fb9381: Gained carrier
Dec 16 09:45:39.844016 systemd[1]: run-containerd-runc-k8s.io-436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e-runc.asjWyu.mount: Deactivated successfully.
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.657 [INFO][4401] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0 calico-apiserver-5444bbcc64- calico-apiserver 2776468d-7227-4930-9fc3-99816559fe3c 790 0 2024-12-16 09:45:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5444bbcc64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 calico-apiserver-5444bbcc64-vwb7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77680fb9381 [] []}} ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.657 [INFO][4401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.693 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" HandleID="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.703 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" HandleID="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bccb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-e-12e77f9037", "pod":"calico-apiserver-5444bbcc64-vwb7k", "timestamp":"2024-12-16 09:45:39.693606558 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.703 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.747 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.747 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037'
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.755 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.763 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.774 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.776 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.782 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.784 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.792 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.799 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.814 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.196/26] block=192.168.74.192/26 handle="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.814 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.196/26] handle="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:39.853081 containerd[1492]: 2024-12-16 09:45:39.814 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:39.854798 containerd[1492]: 2024-12-16 09:45:39.814 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.196/26] IPv6=[] ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" HandleID="k8s-pod-network.d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.854798 containerd[1492]: 2024-12-16 09:45:39.817 [INFO][4401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"2776468d-7227-4930-9fc3-99816559fe3c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"calico-apiserver-5444bbcc64-vwb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77680fb9381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:39.854798 containerd[1492]: 2024-12-16 09:45:39.817 [INFO][4401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.196/32] ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.854798 containerd[1492]: 2024-12-16 09:45:39.817 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77680fb9381 ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.854798 containerd[1492]: 2024-12-16 09:45:39.827 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.854540 systemd[1]: Started cri-containerd-436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e.scope - libcontainer container 436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e.
Dec 16 09:45:39.855454 containerd[1492]: 2024-12-16 09:45:39.828 [INFO][4401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"2776468d-7227-4930-9fc3-99816559fe3c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843", Pod:"calico-apiserver-5444bbcc64-vwb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77680fb9381", MAC:"fe:df:7e:83:41:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:39.855454 containerd[1492]: 2024-12-16 09:45:39.841 [INFO][4401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843" Namespace="calico-apiserver" Pod="calico-apiserver-5444bbcc64-vwb7k" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0"
Dec 16 09:45:39.868628 containerd[1492]: time="2024-12-16T09:45:39.866848344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:39.868628 containerd[1492]: time="2024-12-16T09:45:39.867720224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:39.868628 containerd[1492]: time="2024-12-16T09:45:39.867738808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:39.868628 containerd[1492]: time="2024-12-16T09:45:39.867854464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:39.901523 systemd[1]: Started cri-containerd-e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c.scope - libcontainer container e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c.
Dec 16 09:45:39.909153 containerd[1492]: time="2024-12-16T09:45:39.909074586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:39.910074 containerd[1492]: time="2024-12-16T09:45:39.909995477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:39.910143 containerd[1492]: time="2024-12-16T09:45:39.910071930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:39.910407 containerd[1492]: time="2024-12-16T09:45:39.910329923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:39.946968 systemd[1]: Started cri-containerd-d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843.scope - libcontainer container d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843.
Dec 16 09:45:39.950274 containerd[1492]: time="2024-12-16T09:45:39.950245155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-nclhs,Uid:95596b43-8688-4f06-b103-26a3c0d961e8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e\""
Dec 16 09:45:39.953651 containerd[1492]: time="2024-12-16T09:45:39.953617018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5q54,Uid:4a2c894e-6c25-40a0-9a45-3b04bb3fe827,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c\""
Dec 16 09:45:39.994105 containerd[1492]: time="2024-12-16T09:45:39.994023709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5444bbcc64-vwb7k,Uid:2776468d-7227-4930-9fc3-99816559fe3c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843\""
Dec 16 09:45:40.067649 systemd-networkd[1396]: califd42ab9d8d6: Gained IPv6LL
Dec 16 09:45:40.899616 systemd-networkd[1396]: cali9e7320e5510: Gained IPv6LL
Dec 16 09:45:41.388591 containerd[1492]: time="2024-12-16T09:45:41.388543926Z" level=info msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\""
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.442 [INFO][4627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.442 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" iface="eth0" netns="/var/run/netns/cni-48b4c3f7-6192-a968-abf9-c0020d5b348d"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.443 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" iface="eth0" netns="/var/run/netns/cni-48b4c3f7-6192-a968-abf9-c0020d5b348d"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.444 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" iface="eth0" netns="/var/run/netns/cni-48b4c3f7-6192-a968-abf9-c0020d5b348d"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.444 [INFO][4627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.444 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.480 [INFO][4634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.480 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.480 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.485 [WARNING][4634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.485 [INFO][4634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.487 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:41.491280 containerd[1492]: 2024-12-16 09:45:41.488 [INFO][4627] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696"
Dec 16 09:45:41.491280 containerd[1492]: time="2024-12-16T09:45:41.491193622Z" level=info msg="TearDown network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" successfully"
Dec 16 09:45:41.491280 containerd[1492]: time="2024-12-16T09:45:41.491219551Z" level=info msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" returns successfully"
Dec 16 09:45:41.494762 containerd[1492]: time="2024-12-16T09:45:41.494438147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcrd7,Uid:7a796b60-38ab-4508-af1f-addf8c793894,Namespace:kube-system,Attempt:1,}"
Dec 16 09:45:41.495739 systemd[1]: run-netns-cni\x2d48b4c3f7\x2d6192\x2da968\x2dabf9\x2dc0020d5b348d.mount: Deactivated successfully.
Dec 16 09:45:41.539753 systemd-networkd[1396]: cali77680fb9381: Gained IPv6LL
Dec 16 09:45:41.604712 systemd-networkd[1396]: cali482a961be6d: Gained IPv6LL
Dec 16 09:45:41.632868 systemd-networkd[1396]: cali6b362ab9fbc: Link UP
Dec 16 09:45:41.634340 systemd-networkd[1396]: cali6b362ab9fbc: Gained carrier
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.552 [INFO][4640] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0 coredns-7db6d8ff4d- kube-system 7a796b60-38ab-4508-af1f-addf8c793894 807 0 2024-12-16 09:45:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 coredns-7db6d8ff4d-gcrd7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b362ab9fbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.552 [INFO][4640] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.580 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" HandleID="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.589 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" HandleID="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031abe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-e-12e77f9037", "pod":"coredns-7db6d8ff4d-gcrd7", "timestamp":"2024-12-16 09:45:41.580544557 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.590 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.590 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.590 [INFO][4654] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037'
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.592 [INFO][4654] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.597 [INFO][4654] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.603 [INFO][4654] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.608 [INFO][4654] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.611 [INFO][4654] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.611 [INFO][4654] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.614 [INFO][4654] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.619 [INFO][4654] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.625 [INFO][4654] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.197/26] block=192.168.74.192/26 handle="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.625 [INFO][4654] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.197/26] handle="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.625 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:41.654886 containerd[1492]: 2024-12-16 09:45:41.626 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.197/26] IPv6=[] ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" HandleID="k8s-pod-network.044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.656030 containerd[1492]: 2024-12-16 09:45:41.628 [INFO][4640] cni-plugin/k8s.go 386: Populated endpoint ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a796b60-38ab-4508-af1f-addf8c793894", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"coredns-7db6d8ff4d-gcrd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b362ab9fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:41.656030 containerd[1492]: 2024-12-16 09:45:41.628 [INFO][4640] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.197/32] ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.656030 containerd[1492]: 2024-12-16 09:45:41.628 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b362ab9fbc ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.656030 containerd[1492]: 2024-12-16 09:45:41.636 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.656171 containerd[1492]: 2024-12-16 09:45:41.637 [INFO][4640] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a796b60-38ab-4508-af1f-addf8c793894", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf", Pod:"coredns-7db6d8ff4d-gcrd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b362ab9fbc", MAC:"a6:1c:64:3d:6f:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 16 09:45:41.656171 containerd[1492]: 2024-12-16 09:45:41.651 [INFO][4640] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gcrd7" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0"
Dec 16 09:45:41.700905 containerd[1492]: time="2024-12-16T09:45:41.700296786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:41.700905 containerd[1492]: time="2024-12-16T09:45:41.700416861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:41.700905 containerd[1492]: time="2024-12-16T09:45:41.700442618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:41.700905 containerd[1492]: time="2024-12-16T09:45:41.700540142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:41.736475 systemd[1]: Started cri-containerd-044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf.scope - libcontainer container 044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf.
Dec 16 09:45:41.737878 containerd[1492]: time="2024-12-16T09:45:41.737131065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:41.743451 containerd[1492]: time="2024-12-16T09:45:41.743414942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 16 09:45:41.744381 containerd[1492]: time="2024-12-16T09:45:41.744324022Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:41.747796 containerd[1492]: time="2024-12-16T09:45:41.747763000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:45:41.749080 containerd[1492]: time="2024-12-16T09:45:41.749034749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.889564332s"
Dec 16 09:45:41.749399 containerd[1492]: time="2024-12-16T09:45:41.749380695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 16 09:45:41.751636 containerd[1492]: time="2024-12-16T09:45:41.751618328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 16 09:45:41.775297 containerd[1492]: time="2024-12-16T09:45:41.772717281Z" level=info msg="CreateContainer within sandbox \"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 16 09:45:41.791949 containerd[1492]: time="2024-12-16T09:45:41.791871108Z" level=info msg="CreateContainer within sandbox \"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac\""
Dec 16 09:45:41.793035 containerd[1492]: time="2024-12-16T09:45:41.792928284Z" level=info msg="StartContainer for \"332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac\""
Dec 16 09:45:41.816371 containerd[1492]: time="2024-12-16T09:45:41.815978705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcrd7,Uid:7a796b60-38ab-4508-af1f-addf8c793894,Namespace:kube-system,Attempt:1,} returns sandbox id \"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf\""
Dec 16 09:45:41.822087 containerd[1492]: time="2024-12-16T09:45:41.822035819Z" level=info msg="CreateContainer within sandbox \"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 09:45:41.833514 systemd[1]: Started cri-containerd-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac.scope - libcontainer container 332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac.
Dec 16 09:45:41.842750 containerd[1492]: time="2024-12-16T09:45:41.842713744Z" level=info msg="CreateContainer within sandbox \"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fff1ac1cf87b9015687db35b7fa6b9293d94ddfeeef0995e6a5cc2e34dae787b\""
Dec 16 09:45:41.844210 containerd[1492]: time="2024-12-16T09:45:41.843600672Z" level=info msg="StartContainer for \"fff1ac1cf87b9015687db35b7fa6b9293d94ddfeeef0995e6a5cc2e34dae787b\""
Dec 16 09:45:41.888515 systemd[1]: Started cri-containerd-fff1ac1cf87b9015687db35b7fa6b9293d94ddfeeef0995e6a5cc2e34dae787b.scope - libcontainer container fff1ac1cf87b9015687db35b7fa6b9293d94ddfeeef0995e6a5cc2e34dae787b.
Dec 16 09:45:41.912215 containerd[1492]: time="2024-12-16T09:45:41.912075111Z" level=info msg="StartContainer for \"332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac\" returns successfully"
Dec 16 09:45:41.932927 containerd[1492]: time="2024-12-16T09:45:41.932646047Z" level=info msg="StartContainer for \"fff1ac1cf87b9015687db35b7fa6b9293d94ddfeeef0995e6a5cc2e34dae787b\" returns successfully"
Dec 16 09:45:42.385538 containerd[1492]: time="2024-12-16T09:45:42.385026901Z" level=info msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\""
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.429 [INFO][4806] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.429 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" iface="eth0" netns="/var/run/netns/cni-c8f6d111-c06f-22fb-b9f6-110fb8833564"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.429 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" iface="eth0" netns="/var/run/netns/cni-c8f6d111-c06f-22fb-b9f6-110fb8833564"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.430 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" iface="eth0" netns="/var/run/netns/cni-c8f6d111-c06f-22fb-b9f6-110fb8833564"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.430 [INFO][4806] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.430 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.451 [INFO][4812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.451 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.451 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.455 [WARNING][4812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.455 [INFO][4812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0"
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.457 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:42.461657 containerd[1492]: 2024-12-16 09:45:42.459 [INFO][4806] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e"
Dec 16 09:45:42.463389 containerd[1492]: time="2024-12-16T09:45:42.461776966Z" level=info msg="TearDown network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" successfully"
Dec 16 09:45:42.463389 containerd[1492]: time="2024-12-16T09:45:42.461800599Z" level=info msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" returns successfully"
Dec 16 09:45:42.463389 containerd[1492]: time="2024-12-16T09:45:42.462383880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcgjq,Uid:001a7c98-2357-4161-a94b-51529e6ad491,Namespace:kube-system,Attempt:1,}"
Dec 16 09:45:42.561638 systemd-networkd[1396]: cali179639a9885: Link UP
Dec 16 09:45:42.561851 systemd-networkd[1396]: cali179639a9885: Gained carrier
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.499 [INFO][4819] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0 coredns-7db6d8ff4d- kube-system 001a7c98-2357-4161-a94b-51529e6ad491 823 0 2024-12-16 09:45:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-e-12e77f9037 coredns-7db6d8ff4d-rcgjq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali179639a9885 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.499 [INFO][4819] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.524 [INFO][4829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" HandleID="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.530 [INFO][4829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" HandleID="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003190c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-e-12e77f9037", "pod":"coredns-7db6d8ff4d-rcgjq", "timestamp":"2024-12-16 09:45:42.524098516 +0000 UTC"}, Hostname:"ci-4081-2-1-e-12e77f9037", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.531 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.531 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.531 [INFO][4829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-12e77f9037'
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.532 [INFO][4829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.535 [INFO][4829] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.539 [INFO][4829] ipam/ipam.go 489: Trying affinity for 192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.540 [INFO][4829] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.542 [INFO][4829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.542 [INFO][4829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.544 [INFO][4829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.547 [INFO][4829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.552 [INFO][4829] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.198/26] block=192.168.74.192/26 handle="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.552 [INFO][4829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.198/26] handle="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" host="ci-4081-2-1-e-12e77f9037"
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.552 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:42.572940 containerd[1492]: 2024-12-16 09:45:42.552 [INFO][4829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.198/26] IPv6=[] ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" HandleID="k8s-pod-network.b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:42.574206 containerd[1492]: 2024-12-16 09:45:42.557 [INFO][4819] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"001a7c98-2357-4161-a94b-51529e6ad491", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"", Pod:"coredns-7db6d8ff4d-rcgjq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali179639a9885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:42.574206 containerd[1492]: 2024-12-16 09:45:42.557 [INFO][4819] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.198/32] ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:42.574206 containerd[1492]: 2024-12-16 09:45:42.557 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali179639a9885 ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:42.574206 containerd[1492]: 2024-12-16 09:45:42.559 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" 
WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:42.574523 containerd[1492]: 2024-12-16 09:45:42.559 [INFO][4819] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"001a7c98-2357-4161-a94b-51529e6ad491", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73", Pod:"coredns-7db6d8ff4d-rcgjq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali179639a9885", MAC:"ba:94:01:ed:5d:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:42.574523 containerd[1492]: 2024-12-16 09:45:42.568 [INFO][4819] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rcgjq" WorkloadEndpoint="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:42.628918 containerd[1492]: time="2024-12-16T09:45:42.628473897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:45:42.628918 containerd[1492]: time="2024-12-16T09:45:42.628533278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:45:42.628918 containerd[1492]: time="2024-12-16T09:45:42.628546072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:42.628918 containerd[1492]: time="2024-12-16T09:45:42.628628175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:45:42.655499 systemd[1]: Started cri-containerd-b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73.scope - libcontainer container b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73. Dec 16 09:45:42.664922 kubelet[2819]: I1216 09:45:42.664869 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c8fd9447d-sjkp7" podStartSLOduration=26.772219057 podStartE2EDuration="29.664852066s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:38.858885702 +0000 UTC m=+45.572190176" lastFinishedPulling="2024-12-16 09:45:41.751518712 +0000 UTC m=+48.464823185" observedRunningTime="2024-12-16 09:45:42.663294113 +0000 UTC m=+49.376598597" watchObservedRunningTime="2024-12-16 09:45:42.664852066 +0000 UTC m=+49.378156539" Dec 16 09:45:42.721178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794331901.mount: Deactivated successfully. Dec 16 09:45:42.722869 systemd[1]: run-netns-cni\x2dc8f6d111\x2dc06f\x2d22fb\x2db9f6\x2d110fb8833564.mount: Deactivated successfully. Dec 16 09:45:42.749590 containerd[1492]: time="2024-12-16T09:45:42.749506508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcgjq,Uid:001a7c98-2357-4161-a94b-51529e6ad491,Namespace:kube-system,Attempt:1,} returns sandbox id \"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73\"" Dec 16 09:45:42.754392 kubelet[2819]: I1216 09:45:42.753657 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gcrd7" podStartSLOduration=35.75364216 podStartE2EDuration="35.75364216s" podCreationTimestamp="2024-12-16 09:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:45:42.677283978 +0000 UTC m=+49.390588451" watchObservedRunningTime="2024-12-16 09:45:42.75364216 +0000 UTC m=+49.466946633" Dec 16 09:45:42.755063 containerd[1492]: time="2024-12-16T09:45:42.754975292Z" level=info msg="CreateContainer within sandbox \"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 09:45:42.756159 systemd-networkd[1396]: cali6b362ab9fbc: Gained IPv6LL Dec 16 09:45:42.783335 containerd[1492]: time="2024-12-16T09:45:42.780173541Z" level=info msg="CreateContainer within sandbox \"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d90c0fee5a973d1e1862481886ad0e4ca1ab8ff6ac26692ee83b6b069fa989b\"" Dec 16 09:45:42.783990 containerd[1492]: time="2024-12-16T09:45:42.783602882Z" level=info msg="StartContainer for \"0d90c0fee5a973d1e1862481886ad0e4ca1ab8ff6ac26692ee83b6b069fa989b\"" Dec 16 09:45:42.820481 systemd[1]: Started cri-containerd-0d90c0fee5a973d1e1862481886ad0e4ca1ab8ff6ac26692ee83b6b069fa989b.scope - libcontainer container 0d90c0fee5a973d1e1862481886ad0e4ca1ab8ff6ac26692ee83b6b069fa989b. 
Dec 16 09:45:42.846480 containerd[1492]: time="2024-12-16T09:45:42.846306246Z" level=info msg="StartContainer for \"0d90c0fee5a973d1e1862481886ad0e4ca1ab8ff6ac26692ee83b6b069fa989b\" returns successfully" Dec 16 09:45:43.647454 kubelet[2819]: I1216 09:45:43.647149 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rcgjq" podStartSLOduration=36.64713051 podStartE2EDuration="36.64713051s" podCreationTimestamp="2024-12-16 09:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:45:43.645394695 +0000 UTC m=+50.358699178" watchObservedRunningTime="2024-12-16 09:45:43.64713051 +0000 UTC m=+50.360434993" Dec 16 09:45:43.971588 systemd-networkd[1396]: cali179639a9885: Gained IPv6LL Dec 16 09:45:44.638954 containerd[1492]: time="2024-12-16T09:45:44.638732917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:44.641596 containerd[1492]: time="2024-12-16T09:45:44.640403251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 16 09:45:44.641596 containerd[1492]: time="2024-12-16T09:45:44.640444739Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:44.644173 containerd[1492]: time="2024-12-16T09:45:44.642982363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:44.644173 containerd[1492]: time="2024-12-16T09:45:44.643613031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.891705483s" Dec 16 09:45:44.644173 containerd[1492]: time="2024-12-16T09:45:44.643633880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 16 09:45:44.646637 containerd[1492]: time="2024-12-16T09:45:44.646589877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 16 09:45:44.649019 containerd[1492]: time="2024-12-16T09:45:44.648710122Z" level=info msg="CreateContainer within sandbox \"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 16 09:45:44.684149 containerd[1492]: time="2024-12-16T09:45:44.684097523Z" level=info msg="CreateContainer within sandbox \"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bdd9333ec984392c0dfba0c4bf05e9662a566355699835f66408166122ef23df\"" Dec 16 09:45:44.685767 containerd[1492]: time="2024-12-16T09:45:44.685708807Z" level=info msg="StartContainer for \"bdd9333ec984392c0dfba0c4bf05e9662a566355699835f66408166122ef23df\"" Dec 16 09:45:44.736706 systemd[1]: Started 
cri-containerd-bdd9333ec984392c0dfba0c4bf05e9662a566355699835f66408166122ef23df.scope - libcontainer container bdd9333ec984392c0dfba0c4bf05e9662a566355699835f66408166122ef23df. Dec 16 09:45:44.799962 containerd[1492]: time="2024-12-16T09:45:44.799919892Z" level=info msg="StartContainer for \"bdd9333ec984392c0dfba0c4bf05e9662a566355699835f66408166122ef23df\" returns successfully" Dec 16 09:45:45.760402 kubelet[2819]: I1216 09:45:45.760315 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5444bbcc64-nclhs" podStartSLOduration=28.067739804 podStartE2EDuration="32.760300553s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:39.951915658 +0000 UTC m=+46.665220131" lastFinishedPulling="2024-12-16 09:45:44.644476397 +0000 UTC m=+51.357780880" observedRunningTime="2024-12-16 09:45:45.651549029 +0000 UTC m=+52.364853512" watchObservedRunningTime="2024-12-16 09:45:45.760300553 +0000 UTC m=+52.473605027" Dec 16 09:45:47.023427 containerd[1492]: time="2024-12-16T09:45:47.023380032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:47.027324 containerd[1492]: time="2024-12-16T09:45:47.027263213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 16 09:45:47.028864 containerd[1492]: time="2024-12-16T09:45:47.028779699Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:47.031930 containerd[1492]: time="2024-12-16T09:45:47.031897708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:47.032856 containerd[1492]: time="2024-12-16T09:45:47.032670243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.386042064s" Dec 16 09:45:47.032856 containerd[1492]: time="2024-12-16T09:45:47.032701541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 16 09:45:47.034064 containerd[1492]: time="2024-12-16T09:45:47.033942191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 16 09:45:47.036296 containerd[1492]: time="2024-12-16T09:45:47.036214521Z" level=info msg="CreateContainer within sandbox \"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 16 09:45:47.058145 containerd[1492]: time="2024-12-16T09:45:47.058099813Z" level=info msg="CreateContainer within sandbox \"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f56614c31e2afdd453596198508ca166c3e1502989555dd1c53a373c7704bf79\"" Dec 16 09:45:47.060074 containerd[1492]: time="2024-12-16T09:45:47.060053696Z" level=info msg="StartContainer for 
\"f56614c31e2afdd453596198508ca166c3e1502989555dd1c53a373c7704bf79\"" Dec 16 09:45:47.095476 systemd[1]: Started cri-containerd-f56614c31e2afdd453596198508ca166c3e1502989555dd1c53a373c7704bf79.scope - libcontainer container f56614c31e2afdd453596198508ca166c3e1502989555dd1c53a373c7704bf79. Dec 16 09:45:47.121324 containerd[1492]: time="2024-12-16T09:45:47.121235838Z" level=info msg="StartContainer for \"f56614c31e2afdd453596198508ca166c3e1502989555dd1c53a373c7704bf79\" returns successfully" Dec 16 09:45:47.937110 containerd[1492]: time="2024-12-16T09:45:47.937037971Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:47.952041 containerd[1492]: time="2024-12-16T09:45:47.951815213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 16 09:45:47.954855 containerd[1492]: time="2024-12-16T09:45:47.954797549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 920.829089ms" Dec 16 09:45:47.955017 containerd[1492]: time="2024-12-16T09:45:47.954945756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 16 09:45:47.956828 containerd[1492]: time="2024-12-16T09:45:47.956656936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 16 09:45:47.959980 containerd[1492]: time="2024-12-16T09:45:47.959935375Z" level=info msg="CreateContainer within sandbox \"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 16 09:45:47.976204 containerd[1492]: time="2024-12-16T09:45:47.976050559Z" level=info msg="CreateContainer within sandbox \"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"65a7b3f035fa9cdca0c848477d2f501187b20a394e1e36f9cb24a4d558e42191\"" Dec 16 09:45:47.979150 containerd[1492]: time="2024-12-16T09:45:47.977875271Z" level=info msg="StartContainer for \"65a7b3f035fa9cdca0c848477d2f501187b20a394e1e36f9cb24a4d558e42191\"" Dec 16 09:45:48.017406 systemd[1]: Started cri-containerd-65a7b3f035fa9cdca0c848477d2f501187b20a394e1e36f9cb24a4d558e42191.scope - libcontainer container 65a7b3f035fa9cdca0c848477d2f501187b20a394e1e36f9cb24a4d558e42191. 
Dec 16 09:45:48.075934 containerd[1492]: time="2024-12-16T09:45:48.075885069Z" level=info msg="StartContainer for \"65a7b3f035fa9cdca0c848477d2f501187b20a394e1e36f9cb24a4d558e42191\" returns successfully" Dec 16 09:45:49.655001 kubelet[2819]: I1216 09:45:49.654962 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 09:45:49.720796 containerd[1492]: time="2024-12-16T09:45:49.720723870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:49.721781 containerd[1492]: time="2024-12-16T09:45:49.721737996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 16 09:45:49.722888 containerd[1492]: time="2024-12-16T09:45:49.722847401Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:49.724764 containerd[1492]: time="2024-12-16T09:45:49.724741203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:45:49.725452 containerd[1492]: time="2024-12-16T09:45:49.725288485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.768601925s" Dec 16 09:45:49.725452 containerd[1492]: time="2024-12-16T09:45:49.725325014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 16 09:45:49.727327 containerd[1492]: time="2024-12-16T09:45:49.727295198Z" level=info msg="CreateContainer within sandbox \"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 16 09:45:49.749728 containerd[1492]: time="2024-12-16T09:45:49.749673756Z" level=info msg="CreateContainer within sandbox \"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a71dc9170b93238938171e39cad300ff57c168139c35054587c6ea9bf28a7b0d\"" Dec 16 09:45:49.750330 containerd[1492]: time="2024-12-16T09:45:49.750283826Z" level=info msg="StartContainer for \"a71dc9170b93238938171e39cad300ff57c168139c35054587c6ea9bf28a7b0d\"" Dec 16 09:45:49.781507 systemd[1]: Started cri-containerd-a71dc9170b93238938171e39cad300ff57c168139c35054587c6ea9bf28a7b0d.scope - libcontainer container a71dc9170b93238938171e39cad300ff57c168139c35054587c6ea9bf28a7b0d. 
Dec 16 09:45:49.823218 containerd[1492]: time="2024-12-16T09:45:49.822150446Z" level=info msg="StartContainer for \"a71dc9170b93238938171e39cad300ff57c168139c35054587c6ea9bf28a7b0d\" returns successfully" Dec 16 09:45:50.640153 kubelet[2819]: I1216 09:45:50.640116 2819 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 16 09:45:50.644477 kubelet[2819]: I1216 09:45:50.644401 2819 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 16 09:45:50.690424 kubelet[2819]: I1216 09:45:50.690314 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5444bbcc64-vwb7k" podStartSLOduration=29.729086227 podStartE2EDuration="37.690295211s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:39.995248369 +0000 UTC m=+46.708552841" lastFinishedPulling="2024-12-16 09:45:47.956457352 +0000 UTC m=+54.669761825" observedRunningTime="2024-12-16 09:45:48.664296683 +0000 UTC m=+55.377601156" watchObservedRunningTime="2024-12-16 09:45:50.690295211 +0000 UTC m=+57.403599694" Dec 16 09:45:50.690861 kubelet[2819]: I1216 09:45:50.690543 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l5q54" podStartSLOduration=27.919822704 podStartE2EDuration="37.69053564s" podCreationTimestamp="2024-12-16 09:45:13 +0000 UTC" firstStartedPulling="2024-12-16 09:45:39.955329248 +0000 UTC m=+46.668633722" lastFinishedPulling="2024-12-16 09:45:49.726042185 +0000 UTC m=+56.439346658" observedRunningTime="2024-12-16 09:45:50.688593749 +0000 UTC m=+57.401898252" watchObservedRunningTime="2024-12-16 09:45:50.69053564 +0000 UTC m=+57.403840114" Dec 16 09:45:53.099016 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.nqVKXc.mount: Deactivated successfully. Dec 16 09:45:53.426560 containerd[1492]: time="2024-12-16T09:45:53.426463870Z" level=info msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.527 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a2c894e-6c25-40a0-9a45-3b04bb3fe827", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c", Pod:"csi-node-driver-l5q54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e7320e5510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.528 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.528 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" iface="eth0" netns="" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.528 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.528 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.555 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.555 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.555 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.562 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.562 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.565 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:53.573291 containerd[1492]: 2024-12-16 09:45:53.569 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.573291 containerd[1492]: time="2024-12-16T09:45:53.573055125Z" level=info msg="TearDown network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" successfully" Dec 16 09:45:53.573291 containerd[1492]: time="2024-12-16T09:45:53.573078819Z" level=info msg="StopPodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" returns successfully" Dec 16 09:45:53.614557 containerd[1492]: time="2024-12-16T09:45:53.614475891Z" level=info msg="RemovePodSandbox for \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" Dec 16 09:45:53.617786 containerd[1492]: time="2024-12-16T09:45:53.617749784Z" level=info msg="Forcibly stopping sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\"" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.655 [WARNING][5193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a2c894e-6c25-40a0-9a45-3b04bb3fe827", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"e2cf88deb61d5ae554dc18a27e46f2d9833db6849b695929e259aff68d53a23c", Pod:"csi-node-driver-l5q54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e7320e5510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.656 [INFO][5193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.656 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" iface="eth0" netns="" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.656 [INFO][5193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.656 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.682 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.682 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.682 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.689 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.689 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" HandleID="k8s-pod-network.56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Workload="ci--4081--2--1--e--12e77f9037-k8s-csi--node--driver--l5q54-eth0" Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.691 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:53.697493 containerd[1492]: 2024-12-16 09:45:53.694 [INFO][5193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d" Dec 16 09:45:53.697493 containerd[1492]: time="2024-12-16T09:45:53.696824680Z" level=info msg="TearDown network for sandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" successfully" Dec 16 09:45:53.715201 containerd[1492]: time="2024-12-16T09:45:53.715151314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:53.723370 containerd[1492]: time="2024-12-16T09:45:53.723321314Z" level=info msg="RemovePodSandbox \"56a32c49ed489579793db889b42b4b72e0fd761a88f4ef286fd87e79877fd47d\" returns successfully" Dec 16 09:45:53.723985 containerd[1492]: time="2024-12-16T09:45:53.723922198Z" level=info msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.768 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"2776468d-7227-4930-9fc3-99816559fe3c", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843", Pod:"calico-apiserver-5444bbcc64-vwb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77680fb9381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.768 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.768 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" iface="eth0" netns="" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.768 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.768 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.788 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.788 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.788 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.793 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.793 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.795 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:53.799600 containerd[1492]: 2024-12-16 09:45:53.797 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.800746 containerd[1492]: time="2024-12-16T09:45:53.799611493Z" level=info msg="TearDown network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" successfully" Dec 16 09:45:53.800746 containerd[1492]: time="2024-12-16T09:45:53.799633002Z" level=info msg="StopPodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" returns successfully" Dec 16 09:45:53.800746 containerd[1492]: time="2024-12-16T09:45:53.800170588Z" level=info msg="RemovePodSandbox for \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" Dec 16 09:45:53.800746 containerd[1492]: time="2024-12-16T09:45:53.800202948Z" level=info msg="Forcibly stopping sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\"" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.834 [WARNING][5246] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"2776468d-7227-4930-9fc3-99816559fe3c", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"d00fe559e52738995856d7995cfc0372bd0d9313de74f8d395df4d3d88e61843", Pod:"calico-apiserver-5444bbcc64-vwb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77680fb9381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.835 [INFO][5246] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.835 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" iface="eth0" netns="" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.835 [INFO][5246] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.835 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.854 [INFO][5252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.855 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.855 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.861 [WARNING][5252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.861 [INFO][5252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" HandleID="k8s-pod-network.119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--vwb7k-eth0" Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.862 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:53.868330 containerd[1492]: 2024-12-16 09:45:53.865 [INFO][5246] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88" Dec 16 09:45:53.870513 containerd[1492]: time="2024-12-16T09:45:53.868764673Z" level=info msg="TearDown network for sandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" successfully" Dec 16 09:45:53.872442 containerd[1492]: time="2024-12-16T09:45:53.872397177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:53.872509 containerd[1492]: time="2024-12-16T09:45:53.872463381Z" level=info msg="RemovePodSandbox \"119ca6c3e6f3aef2d4c73e95d9d347b6a9ca9695a774742a63640c762abd1e88\" returns successfully" Dec 16 09:45:53.873236 containerd[1492]: time="2024-12-16T09:45:53.872957706Z" level=info msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\"" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.908 [WARNING][5270] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a796b60-38ab-4508-af1f-addf8c793894", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf", Pod:"coredns-7db6d8ff4d-gcrd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b362ab9fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.908 [INFO][5270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.908 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" iface="eth0" netns="" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.908 [INFO][5270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.908 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.927 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.927 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.927 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.932 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.932 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.933 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:53.938304 containerd[1492]: 2024-12-16 09:45:53.935 [INFO][5270] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:53.939487 containerd[1492]: time="2024-12-16T09:45:53.938320928Z" level=info msg="TearDown network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" successfully" Dec 16 09:45:53.939487 containerd[1492]: time="2024-12-16T09:45:53.938364219Z" level=info msg="StopPodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" returns successfully" Dec 16 09:45:53.939487 containerd[1492]: time="2024-12-16T09:45:53.938813259Z" level=info msg="RemovePodSandbox for \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\"" Dec 16 09:45:53.939487 containerd[1492]: time="2024-12-16T09:45:53.938835601Z" level=info msg="Forcibly stopping sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\"" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.974 [WARNING][5295] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a796b60-38ab-4508-af1f-addf8c793894", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"044d78270f9018b35f34ec848ade3452522b836b391e5bf02641f92277edb2bf", Pod:"coredns-7db6d8ff4d-gcrd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b362ab9fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.974 [INFO][5295] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.974 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" iface="eth0" netns="" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.974 [INFO][5295] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.974 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.996 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.996 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:53.996 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:54.001 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:54.001 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" HandleID="k8s-pod-network.845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--gcrd7-eth0" Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:54.003 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.009115 containerd[1492]: 2024-12-16 09:45:54.005 [INFO][5295] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696" Dec 16 09:45:54.011611 containerd[1492]: time="2024-12-16T09:45:54.009436159Z" level=info msg="TearDown network for sandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" successfully" Dec 16 09:45:54.031402 containerd[1492]: time="2024-12-16T09:45:54.031196616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:54.031402 containerd[1492]: time="2024-12-16T09:45:54.031278530Z" level=info msg="RemovePodSandbox \"845fbd21ff0de61fed48c7e675bf93154637f68d7564c7c75ec16b5cc93c2696\" returns successfully" Dec 16 09:45:54.031874 containerd[1492]: time="2024-12-16T09:45:54.031831123Z" level=info msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\"" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.084 [WARNING][5320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"001a7c98-2357-4161-a94b-51529e6ad491", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73", Pod:"coredns-7db6d8ff4d-rcgjq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali179639a9885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.084 [INFO][5320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.084 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" iface="eth0" netns="" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.084 [INFO][5320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.084 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.112 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.112 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.112 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.121 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.121 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.123 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.129799 containerd[1492]: 2024-12-16 09:45:54.126 [INFO][5320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.129799 containerd[1492]: time="2024-12-16T09:45:54.129793889Z" level=info msg="TearDown network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" successfully" Dec 16 09:45:54.130325 containerd[1492]: time="2024-12-16T09:45:54.129818866Z" level=info msg="StopPodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" returns successfully" Dec 16 09:45:54.130325 containerd[1492]: time="2024-12-16T09:45:54.130295247Z" level=info msg="RemovePodSandbox for \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\"" Dec 16 09:45:54.130325 containerd[1492]: time="2024-12-16T09:45:54.130319172Z" level=info msg="Forcibly stopping sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\"" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.171 [WARNING][5345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"001a7c98-2357-4161-a94b-51529e6ad491", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"b36b29ec184db7301300831e0544210275bd566382226c0e1fe0e4e2d3108d73", Pod:"coredns-7db6d8ff4d-rcgjq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali179639a9885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.171 [INFO][5345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.171 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" iface="eth0" netns="" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.171 [INFO][5345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.171 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.193 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.194 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.194 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.199 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.199 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" HandleID="k8s-pod-network.a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Workload="ci--4081--2--1--e--12e77f9037-k8s-coredns--7db6d8ff4d--rcgjq-eth0" Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.200 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.208716 containerd[1492]: 2024-12-16 09:45:54.204 [INFO][5345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e" Dec 16 09:45:54.210674 containerd[1492]: time="2024-12-16T09:45:54.208980018Z" level=info msg="TearDown network for sandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" successfully" Dec 16 09:45:54.214139 containerd[1492]: time="2024-12-16T09:45:54.214081959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:54.214139 containerd[1492]: time="2024-12-16T09:45:54.214137253Z" level=info msg="RemovePodSandbox \"a8090045dd98881061d3128ba9de127d5bedfb05033476c6b0543228810ba90e\" returns successfully" Dec 16 09:45:54.216268 containerd[1492]: time="2024-12-16T09:45:54.215050852Z" level=info msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.262 [WARNING][5369] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"95596b43-8688-4f06-b103-26a3c0d961e8", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e", Pod:"calico-apiserver-5444bbcc64-nclhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali482a961be6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.263 [INFO][5369] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.263 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" iface="eth0" netns="" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.263 [INFO][5369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.263 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.302 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.302 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.302 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.308 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.309 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.311 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.318415 containerd[1492]: 2024-12-16 09:45:54.316 [INFO][5369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.319381 containerd[1492]: time="2024-12-16T09:45:54.319080946Z" level=info msg="TearDown network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" successfully" Dec 16 09:45:54.319381 containerd[1492]: time="2024-12-16T09:45:54.319142140Z" level=info msg="StopPodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" returns successfully" Dec 16 09:45:54.319979 containerd[1492]: time="2024-12-16T09:45:54.319927068Z" level=info msg="RemovePodSandbox for \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" Dec 16 09:45:54.320155 containerd[1492]: time="2024-12-16T09:45:54.320031653Z" level=info msg="Forcibly stopping sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\"" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.358 [WARNING][5395] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0", GenerateName:"calico-apiserver-5444bbcc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"95596b43-8688-4f06-b103-26a3c0d961e8", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5444bbcc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"436270bc5c03f45c2ae4561b73e3945cd95bc42a2bef6e3ad622548f583bab5e", Pod:"calico-apiserver-5444bbcc64-nclhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali482a961be6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.359 [INFO][5395] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.359 [INFO][5395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" iface="eth0" netns="" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.359 [INFO][5395] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.359 [INFO][5395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.383 [INFO][5402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.384 [INFO][5402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.384 [INFO][5402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.394 [WARNING][5402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.394 [INFO][5402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" HandleID="k8s-pod-network.c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--apiserver--5444bbcc64--nclhs-eth0" Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.395 [INFO][5402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.407603 containerd[1492]: 2024-12-16 09:45:54.402 [INFO][5395] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9" Dec 16 09:45:54.410926 containerd[1492]: time="2024-12-16T09:45:54.408046624Z" level=info msg="TearDown network for sandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" successfully" Dec 16 09:45:54.415086 containerd[1492]: time="2024-12-16T09:45:54.415064449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:54.416375 containerd[1492]: time="2024-12-16T09:45:54.416330988Z" level=info msg="RemovePodSandbox \"c2dc58e582d8e0efd8ed558ad53f50b65d2518ed6a8ff37fd2d2656406f4d9b9\" returns successfully" Dec 16 09:45:54.417017 containerd[1492]: time="2024-12-16T09:45:54.417000000Z" level=info msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.466 [WARNING][5421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0", GenerateName:"calico-kube-controllers-c8fd9447d-", Namespace:"calico-system", SelfLink:"", UID:"165fee54-0a8a-430c-9a31-1d0a39e3bdfc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8fd9447d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b", Pod:"calico-kube-controllers-c8fd9447d-sjkp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd42ab9d8d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.466 [INFO][5421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.466 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" iface="eth0" netns="" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.466 [INFO][5421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.466 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.497 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.497 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.497 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.503 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.503 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.505 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:45:54.512185 containerd[1492]: 2024-12-16 09:45:54.508 [INFO][5421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.512997 containerd[1492]: time="2024-12-16T09:45:54.512232058Z" level=info msg="TearDown network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" successfully" Dec 16 09:45:54.512997 containerd[1492]: time="2024-12-16T09:45:54.512255843Z" level=info msg="StopPodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" returns successfully" Dec 16 09:45:54.514006 containerd[1492]: time="2024-12-16T09:45:54.513580391Z" level=info msg="RemovePodSandbox for \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" Dec 16 09:45:54.514006 containerd[1492]: time="2024-12-16T09:45:54.513635423Z" level=info msg="Forcibly stopping sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\"" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.558 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0", GenerateName:"calico-kube-controllers-c8fd9447d-", Namespace:"calico-system", SelfLink:"", UID:"165fee54-0a8a-430c-9a31-1d0a39e3bdfc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 45, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8fd9447d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-12e77f9037", ContainerID:"a725326444e30d1d88c4eac7be07e4f75fc988f7b80c316dd0a70a03683ae21b", Pod:"calico-kube-controllers-c8fd9447d-sjkp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd42ab9d8d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.559 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.559 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" iface="eth0" netns="" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.559 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.559 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.584 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0" Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.584 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.584 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.589 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0"
Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.589 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" HandleID="k8s-pod-network.1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0" Workload="ci--4081--2--1--e--12e77f9037-k8s-calico--kube--controllers--c8fd9447d--sjkp7-eth0"
Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.591 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 16 09:45:54.596330 containerd[1492]: 2024-12-16 09:45:54.593 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0"
Dec 16 09:45:54.596330 containerd[1492]: time="2024-12-16T09:45:54.596297041Z" level=info msg="TearDown network for sandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" successfully"
Dec 16 09:45:54.600985 containerd[1492]: time="2024-12-16T09:45:54.600920879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 16 09:45:54.601083 containerd[1492]: time="2024-12-16T09:45:54.600999808Z" level=info msg="RemovePodSandbox \"1c7de2a1180618e06f064bd14467f5e7a8b0596799fc35d872e8b6aa5b4736d0\" returns successfully"
Dec 16 09:45:55.611423 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.zPujcp.mount: Deactivated successfully.
Dec 16 09:46:10.554088 kubelet[2819]: I1216 09:46:10.553784 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 09:46:16.843809 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.bFgJKG.mount: Deactivated successfully.
Dec 16 09:46:23.101909 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.ysMKGV.mount: Deactivated successfully.
Dec 16 09:46:53.091216 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.7gMpZX.mount: Deactivated successfully.
Dec 16 09:46:55.604214 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.muTSHD.mount: Deactivated successfully.
Dec 16 09:47:16.830091 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.ykvm9a.mount: Deactivated successfully.
Dec 16 09:47:23.096525 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.j7gs6e.mount: Deactivated successfully.
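The sandbox teardowns above all repeat one fixed shape: acquire the host-wide IPAM lock, try to release the allocation by its handleID, fall back to the workloadID, and treat an already-released address as a warning ("Asked to release address but it doesn't exist. Ignoring") rather than a failure. A minimal Go sketch of that release pattern, with a toy in-memory allocator standing in for Calico's IPAM store (type and function names are illustrative, not libcalico-go's real API):

```go
package main

import (
	"fmt"
	"sync"
)

// allocator is a toy stand-in for Calico's IPAM store: it maps a handle
// (or workload ID) to the IPs allocated under it.
type allocator struct {
	mu          sync.Mutex // plays the role of the host-wide IPAM lock
	allocations map[string][]string
}

// releaseByHandle mirrors the logged flow: take the lock, release by
// handleID if present, otherwise fall back to the workload ID, and treat
// "address doesn't exist" as a warning rather than an error.
func (a *allocator) releaseByHandle(handleID, workloadID string) {
	a.mu.Lock() // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock()

	for _, key := range []string{handleID, workloadID} {
		if ips, ok := a.allocations[key]; ok {
			delete(a.allocations, key)
			fmt.Printf("released %v via %q\n", ips, key)
			return
		}
	}
	// Matches the WARNING in the log: the address was already gone, but
	// teardown still completes successfully.
	fmt.Println("asked to release address but it doesn't exist; ignoring")
}

func main() {
	a := &allocator{allocations: map[string][]string{
		"k8s-pod-network.845fbd21": {"192.168.74.197/32"}, // shortened ID for illustration
	}}
	a.releaseByHandle("k8s-pod-network.845fbd21", "coredns-7db6d8ff4d-gcrd7")
	// A second release of the same sandbox takes the warning path, which is
	// exactly what the repeated StopPodSandbox/"Forcibly stopping sandbox"
	// passes above produce.
	a.releaseByHandle("k8s-pod-network.845fbd21", "coredns-7db6d8ff4d-gcrd7")
}
```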
Dec 16 09:47:35.135314 update_engine[1481]: I20241216 09:47:35.135222 1481 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 16 09:47:35.135314 update_engine[1481]: I20241216 09:47:35.135299 1481 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 16 09:47:35.137862 update_engine[1481]: I20241216 09:47:35.137821 1481 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 16 09:47:35.139394 update_engine[1481]: I20241216 09:47:35.139262 1481 omaha_request_params.cc:62] Current group set to stable
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139564 1481 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139601 1481 update_attempter.cc:643] Scheduling an action processor start.
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139624 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139670 1481 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139757 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139768 1481 omaha_request_action.cc:272] Request:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]:
Dec 16 09:47:35.140626 update_engine[1481]: I20241216 09:47:35.139776 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 09:47:35.155148 update_engine[1481]: I20241216 09:47:35.154651 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 09:47:35.155148 update_engine[1481]: I20241216 09:47:35.155042 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 09:47:35.157204 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 16 09:47:35.158004 update_engine[1481]: E20241216 09:47:35.157196 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 09:47:35.158004 update_engine[1481]: I20241216 09:47:35.157274 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 16 09:47:45.014441 update_engine[1481]: I20241216 09:47:45.014278 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 09:47:45.017089 update_engine[1481]: I20241216 09:47:45.016995 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 09:47:45.017367 update_engine[1481]: I20241216 09:47:45.017313 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
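The prefs.cc lines report keys that update_engine keeps as individual files under /var/lib/update_engine/prefs, one file per key with the value as the file's contents; a missing file is logged as "not present". (The blank records after "Request:" are where the Omaha request XML appeared in the original journal.) A small Go sketch of reading prefs laid out that way, as the retry sequence continues below; the reader itself is illustrative only, since the daemon is C++:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readPref mimics how update_engine's prefs work at the filesystem level:
// each key is a file under the prefs directory and the value is the file's
// contents. The directory path matches the log; everything else here is an
// illustrative reader, not update_engine's actual API.
func readPref(dir, key string) (string, bool) {
	data, err := os.ReadFile(filepath.Join(dir, key))
	if err != nil {
		return "", false // "<key> not present in <dir>"
	}
	return strings.TrimSpace(string(data)), true
}

func main() {
	const prefsDir = "/var/lib/update_engine/prefs"
	for _, key := range []string{"aleph-version", "previous-version"} {
		if v, ok := readPref(prefsDir, key); ok {
			fmt.Printf("%s = %q\n", key, v)
		} else {
			fmt.Printf("%s not present in %s\n", key, prefsDir)
		}
	}
}
```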
Dec 16 09:47:45.018001 update_engine[1481]: E20241216 09:47:45.017968 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 09:47:45.018070 update_engine[1481]: I20241216 09:47:45.018015 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 16 09:47:55.014544 update_engine[1481]: I20241216 09:47:55.014447 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 09:47:55.015183 update_engine[1481]: I20241216 09:47:55.014760 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 09:47:55.015183 update_engine[1481]: I20241216 09:47:55.015034 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 09:47:55.015932 update_engine[1481]: E20241216 09:47:55.015877 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 09:47:55.016014 update_engine[1481]: I20241216 09:47:55.015937 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 16 09:48:05.013665 update_engine[1481]: I20241216 09:48:05.013579 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 09:48:05.014758 update_engine[1481]: I20241216 09:48:05.013818 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 09:48:05.014758 update_engine[1481]: I20241216 09:48:05.014050 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 09:48:05.015243 update_engine[1481]: E20241216 09:48:05.014896 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 09:48:05.015243 update_engine[1481]: I20241216 09:48:05.014941 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 09:48:05.017686 update_engine[1481]: I20241216 09:48:05.017633 1481 omaha_request_action.cc:617] Omaha request response:
Dec 16 09:48:05.017759 update_engine[1481]: E20241216 09:48:05.017727 1481 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 16 09:48:05.017759 update_engine[1481]: I20241216 09:48:05.017750 1481 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 16 09:48:05.017759 update_engine[1481]: I20241216 09:48:05.017756 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017762 1481 update_attempter.cc:306] Processing Done.
Dec 16 09:48:05.018449 update_engine[1481]: E20241216 09:48:05.017781 1481 update_attempter.cc:619] Update failed.
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017788 1481 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017793 1481 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017800 1481 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
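The failed check follows a plain bounded-retry pattern: each attempt re-arms a 1-second timeout source, a DNS failure counts as "no HTTP response", and after three retries the action aborts and the error is mapped to kActionCodeOmahaErrorInHTTPResponse (37). The host literally being "disabled" is the configured update server name, so every resolution fails. A minimal Go sketch of the same fetch-and-retry shape (URL, client timeout, and retry budget are illustrative; the real fetcher is C++ and more elaborate):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry imitates the libcurl_http_fetcher pattern in the log:
// wait a short, fixed interval between attempts, count retries, and give
// up after a fixed budget.
func fetchWithRetry(url string, maxRetries int, interval time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("unable to get http response code: %v; retry %d\n", err, attempt)
		time.Sleep(interval) // "Setting up timeout source: 1 seconds."
	}
	return errors.New("transfer resulted in an error, 0 bytes downloaded")
}

func main() {
	// "disabled" is not a resolvable host, so this reproduces the
	// "Could not resolve host: disabled" failures from the log.
	if err := fetchWithRetry("http://disabled/update", 3, time.Second); err != nil {
		fmt.Println("omaha request network transfer failed:", err)
	}
}
```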
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017902 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017935 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017946 1481 omaha_request_action.cc:272] Request:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]:
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.017956 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.018105 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 09:48:05.018449 update_engine[1481]: I20241216 09:48:05.018337 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 09:48:05.019308 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 16 09:48:05.019714 update_engine[1481]: E20241216 09:48:05.019106 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019169 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019232 1481 omaha_request_action.cc:617] Omaha request response:
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019246 1481 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019256 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019276 1481 update_attempter.cc:306] Processing Done.
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019287 1481 update_attempter.cc:310] Error event sent.
Dec 16 09:48:05.019714 update_engine[1481]: I20241216 09:48:05.019301 1481 update_check_scheduler.cc:74] Next update check in 46m12s
Dec 16 09:48:05.020668 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 16 09:48:16.835797 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.oSiD46.mount: Deactivated successfully.
Dec 16 09:48:25.603663 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.2eb5TO.mount: Deactivated successfully.
Dec 16 09:48:53.116369 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.aAmsYq.mount: Deactivated successfully.
Dec 16 09:49:46.139630 systemd[1]: Started sshd@8-157.90.156.134:22-147.75.109.163:49522.service - OpenSSH per-connection server daemon (147.75.109.163:49522).
Dec 16 09:49:47.133052 sshd[5943]: Accepted publickey for core from 147.75.109.163 port 49522 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:49:47.135949 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:49:47.141028 systemd-logind[1474]: New session 8 of user core.
Dec 16 09:49:47.146469 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 09:49:48.308961 sshd[5943]: pam_unix(sshd:session): session closed for user core
Dec 16 09:49:48.314121 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit.
Dec 16 09:49:48.314621 systemd[1]: sshd@8-157.90.156.134:22-147.75.109.163:49522.service: Deactivated successfully.
Dec 16 09:49:48.317108 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 09:49:48.318087 systemd-logind[1474]: Removed session 8.
Dec 16 09:49:53.490609 systemd[1]: Started sshd@9-157.90.156.134:22-147.75.109.163:47892.service - OpenSSH per-connection server daemon (147.75.109.163:47892).
Dec 16 09:49:54.490977 sshd[5982]: Accepted publickey for core from 147.75.109.163 port 47892 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:49:54.492799 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:49:54.498491 systemd-logind[1474]: New session 9 of user core.
Dec 16 09:49:54.505493 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 09:49:55.264602 sshd[5982]: pam_unix(sshd:session): session closed for user core
Dec 16 09:49:55.269074 systemd[1]: sshd@9-157.90.156.134:22-147.75.109.163:47892.service: Deactivated successfully.
Dec 16 09:49:55.272052 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 09:49:55.273024 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit.
Dec 16 09:49:55.274139 systemd-logind[1474]: Removed session 9.
Dec 16 09:50:00.431607 systemd[1]: Started sshd@10-157.90.156.134:22-147.75.109.163:57786.service - OpenSSH per-connection server daemon (147.75.109.163:57786).
Dec 16 09:50:01.418947 sshd[6015]: Accepted publickey for core from 147.75.109.163 port 57786 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:01.421063 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:01.426115 systemd-logind[1474]: New session 10 of user core.
Dec 16 09:50:01.431574 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 09:50:02.161404 sshd[6015]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:02.164843 systemd[1]: sshd@10-157.90.156.134:22-147.75.109.163:57786.service: Deactivated successfully.
Dec 16 09:50:02.167753 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 09:50:02.169257 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit.
Dec 16 09:50:02.170528 systemd-logind[1474]: Removed session 10.
Dec 16 09:50:02.331038 systemd[1]: Started sshd@11-157.90.156.134:22-147.75.109.163:57798.service - OpenSSH per-connection server daemon (147.75.109.163:57798).
Dec 16 09:50:03.304295 sshd[6029]: Accepted publickey for core from 147.75.109.163 port 57798 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:03.305922 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:03.311074 systemd-logind[1474]: New session 11 of user core.
Dec 16 09:50:03.316532 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 09:50:04.104823 sshd[6029]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:04.111540 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit.
Dec 16 09:50:04.112175 systemd[1]: sshd@11-157.90.156.134:22-147.75.109.163:57798.service: Deactivated successfully.
Dec 16 09:50:04.114482 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 09:50:04.115793 systemd-logind[1474]: Removed session 11.
Dec 16 09:50:04.271094 systemd[1]: Started sshd@12-157.90.156.134:22-147.75.109.163:57806.service - OpenSSH per-connection server daemon (147.75.109.163:57806).
Dec 16 09:50:05.260816 sshd[6044]: Accepted publickey for core from 147.75.109.163 port 57806 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:05.262982 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:05.267525 systemd-logind[1474]: New session 12 of user core.
Dec 16 09:50:05.272573 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 09:50:06.012674 sshd[6044]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:06.018523 systemd[1]: sshd@12-157.90.156.134:22-147.75.109.163:57806.service: Deactivated successfully.
Dec 16 09:50:06.021740 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 09:50:06.022787 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit.
Dec 16 09:50:06.024030 systemd-logind[1474]: Removed session 12.
Dec 16 09:50:11.185631 systemd[1]: Started sshd@13-157.90.156.134:22-147.75.109.163:34824.service - OpenSSH per-connection server daemon (147.75.109.163:34824).
Dec 16 09:50:12.164907 sshd[6060]: Accepted publickey for core from 147.75.109.163 port 34824 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:12.167632 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:12.173275 systemd-logind[1474]: New session 13 of user core.
Dec 16 09:50:12.178507 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 09:50:12.914310 sshd[6060]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:12.920876 systemd[1]: sshd@13-157.90.156.134:22-147.75.109.163:34824.service: Deactivated successfully.
Dec 16 09:50:12.924987 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 09:50:12.926161 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit.
Dec 16 09:50:12.927472 systemd-logind[1474]: Removed session 13.
Dec 16 09:50:13.085673 systemd[1]: Started sshd@14-157.90.156.134:22-147.75.109.163:34830.service - OpenSSH per-connection server daemon (147.75.109.163:34830).
Dec 16 09:50:14.059773 sshd[6073]: Accepted publickey for core from 147.75.109.163 port 34830 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:14.061635 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:14.067043 systemd-logind[1474]: New session 14 of user core.
Dec 16 09:50:14.073658 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 09:50:15.029010 sshd[6073]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:15.034762 systemd[1]: sshd@14-157.90.156.134:22-147.75.109.163:34830.service: Deactivated successfully.
Dec 16 09:50:15.037977 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 09:50:15.039712 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit.
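Each sshd@N-<local>:22-<peer>:<port>.service above (and in the sessions that follow) is a per-connection unit: the listener hands every accepted TCP peer to its own short-lived service instance, while systemd-logind tracks the matching PAM session in a session-N.scope. A toy Go version of that accept-per-connection shape (port and messages are illustrative; real sshd obviously does far more per connection):

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// servePerConnection mirrors the per-connection pattern in miniature:
// one listener, and a dedicated handler (the analogue of an
// sshd@...service instance) per accepted peer.
func servePerConnection(addr string) error {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	defer ln.Close()
	for session := 1; ; session++ {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(id int, c net.Conn) {
			defer c.Close()
			log.Printf("session %d opened for %s", id, c.RemoteAddr()) // "New session N of user core."
			fmt.Fprintf(c, "hello session %d\n", id)
			log.Printf("session %d closed", id) // "Removed session N."
		}(session, conn)
	}
}

func main() {
	log.Fatal(servePerConnection("127.0.0.1:2222"))
}
```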
Dec 16 09:50:15.041103 systemd-logind[1474]: Removed session 14.
Dec 16 09:50:15.196713 systemd[1]: Started sshd@15-157.90.156.134:22-147.75.109.163:34842.service - OpenSSH per-connection server daemon (147.75.109.163:34842).
Dec 16 09:50:16.195166 sshd[6085]: Accepted publickey for core from 147.75.109.163 port 34842 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:16.198960 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:16.207164 systemd-logind[1474]: New session 15 of user core.
Dec 16 09:50:16.218552 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 09:50:18.745915 sshd[6085]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:18.753147 systemd[1]: sshd@15-157.90.156.134:22-147.75.109.163:34842.service: Deactivated successfully.
Dec 16 09:50:18.755703 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 09:50:18.757624 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
Dec 16 09:50:18.759249 systemd-logind[1474]: Removed session 15.
Dec 16 09:50:18.908736 systemd[1]: Started sshd@16-157.90.156.134:22-147.75.109.163:33708.service - OpenSSH per-connection server daemon (147.75.109.163:33708).
Dec 16 09:50:19.899436 sshd[6123]: Accepted publickey for core from 147.75.109.163 port 33708 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:19.901395 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:19.906746 systemd-logind[1474]: New session 16 of user core.
Dec 16 09:50:19.913502 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 09:50:20.869340 sshd[6123]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:20.873017 systemd[1]: sshd@16-157.90.156.134:22-147.75.109.163:33708.service: Deactivated successfully.
Dec 16 09:50:20.875306 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 09:50:20.877319 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
Dec 16 09:50:20.879330 systemd-logind[1474]: Removed session 16.
Dec 16 09:50:21.043664 systemd[1]: Started sshd@17-157.90.156.134:22-147.75.109.163:33716.service - OpenSSH per-connection server daemon (147.75.109.163:33716).
Dec 16 09:50:22.020737 sshd[6139]: Accepted publickey for core from 147.75.109.163 port 33716 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:22.022731 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:22.027403 systemd-logind[1474]: New session 17 of user core.
Dec 16 09:50:22.033503 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 09:50:22.765381 sshd[6139]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:22.769415 systemd[1]: sshd@17-157.90.156.134:22-147.75.109.163:33716.service: Deactivated successfully.
Dec 16 09:50:22.772079 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 09:50:22.772975 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
Dec 16 09:50:22.774166 systemd-logind[1474]: Removed session 17.
Dec 16 09:50:25.613315 systemd[1]: run-containerd-runc-k8s.io-332a42b695a2655e99e9d290669a79867dbf04de6665068d4fed26a1366346ac-runc.erUYr8.mount: Deactivated successfully.
Dec 16 09:50:27.930114 systemd[1]: Started sshd@18-157.90.156.134:22-147.75.109.163:43454.service - OpenSSH per-connection server daemon (147.75.109.163:43454).
Dec 16 09:50:28.902564 sshd[6208]: Accepted publickey for core from 147.75.109.163 port 43454 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:28.904274 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:28.909117 systemd-logind[1474]: New session 18 of user core.
Dec 16 09:50:28.913503 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 09:50:29.638572 sshd[6208]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:29.641585 systemd[1]: sshd@18-157.90.156.134:22-147.75.109.163:43454.service: Deactivated successfully.
Dec 16 09:50:29.644379 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 09:50:29.645837 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
Dec 16 09:50:29.647287 systemd-logind[1474]: Removed session 18.
Dec 16 09:50:34.819823 systemd[1]: Started sshd@19-157.90.156.134:22-147.75.109.163:43464.service - OpenSSH per-connection server daemon (147.75.109.163:43464).
Dec 16 09:50:35.819670 sshd[6221]: Accepted publickey for core from 147.75.109.163 port 43464 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:50:35.821429 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:50:35.826658 systemd-logind[1474]: New session 19 of user core.
Dec 16 09:50:35.829510 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 09:50:36.582822 sshd[6221]: pam_unix(sshd:session): session closed for user core
Dec 16 09:50:36.586830 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
Dec 16 09:50:36.587856 systemd[1]: sshd@19-157.90.156.134:22-147.75.109.163:43464.service: Deactivated successfully.
Dec 16 09:50:36.590289 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 09:50:36.591287 systemd-logind[1474]: Removed session 19.
Dec 16 09:50:53.093133 systemd[1]: run-containerd-runc-k8s.io-dcb7364a60e8a8fa7c41c3f956d289a42e3b09cd72e9814c8d187a4e0d65ad1f-runc.RjoBNc.mount: Deactivated successfully.
Dec 16 09:50:54.073694 systemd[1]: cri-containerd-d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593.scope: Deactivated successfully.
Dec 16 09:50:54.075150 systemd[1]: cri-containerd-d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593.scope: Consumed 4.572s CPU time.
Dec 16 09:50:54.200113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593-rootfs.mount: Deactivated successfully.
Dec 16 09:50:54.235183 containerd[1492]: time="2024-12-16T09:50:54.215533757Z" level=info msg="shim disconnected" id=d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593 namespace=k8s.io
Dec 16 09:50:54.242362 containerd[1492]: time="2024-12-16T09:50:54.242285916Z" level=warning msg="cleaning up after shim disconnected" id=d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593 namespace=k8s.io
Dec 16 09:50:54.242546 containerd[1492]: time="2024-12-16T09:50:54.242334997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:50:54.270933 systemd[1]: cri-containerd-91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a.scope: Deactivated successfully.
Dec 16 09:50:54.271198 systemd[1]: cri-containerd-91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a.scope: Consumed 5.934s CPU time, 22.8M memory peak, 0B memory swap peak.
Dec 16 09:50:54.306253 containerd[1492]: time="2024-12-16T09:50:54.305967874Z" level=info msg="shim disconnected" id=91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a namespace=k8s.io
Dec 16 09:50:54.306253 containerd[1492]: time="2024-12-16T09:50:54.306042672Z" level=warning msg="cleaning up after shim disconnected" id=91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a namespace=k8s.io
Dec 16 09:50:54.306253 containerd[1492]: time="2024-12-16T09:50:54.306061077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:50:54.309908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a-rootfs.mount: Deactivated successfully.
Dec 16 09:50:54.322683 systemd[1]: cri-containerd-6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919.scope: Deactivated successfully.
Dec 16 09:50:54.322932 systemd[1]: cri-containerd-6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919.scope: Consumed 1.768s CPU time, 16.7M memory peak, 0B memory swap peak.
Dec 16 09:50:54.331324 kubelet[2819]: E1216 09:50:54.331100 2819 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34274->10.0.0.2:2379: read: connection timed out"
Dec 16 09:50:54.369989 containerd[1492]: time="2024-12-16T09:50:54.369786956Z" level=info msg="shim disconnected" id=6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919 namespace=k8s.io
Dec 16 09:50:54.369989 containerd[1492]: time="2024-12-16T09:50:54.369958734Z" level=warning msg="cleaning up after shim disconnected" id=6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919 namespace=k8s.io
Dec 16 09:50:54.369989 containerd[1492]: time="2024-12-16T09:50:54.369967030Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:50:54.371203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919-rootfs.mount: Deactivated successfully.
Dec 16 09:50:54.485569 kubelet[2819]: I1216 09:50:54.485528 2819 scope.go:117] "RemoveContainer" containerID="6d1f8438d6abf6a12a6d23d2b4255269594f95e164d18e4aa790c22166ccd919"
Dec 16 09:50:54.485915 kubelet[2819]: I1216 09:50:54.485887 2819 scope.go:117] "RemoveContainer" containerID="d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593"
Dec 16 09:50:54.486413 kubelet[2819]: I1216 09:50:54.486089 2819 scope.go:117] "RemoveContainer" containerID="91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a"
Dec 16 09:50:54.498402 containerd[1492]: time="2024-12-16T09:50:54.498124506Z" level=info msg="CreateContainer within sandbox \"d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 16 09:50:54.498668 containerd[1492]: time="2024-12-16T09:50:54.498473974Z" level=info msg="CreateContainer within sandbox \"ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 16 09:50:54.538955 containerd[1492]: time="2024-12-16T09:50:54.538910117Z" level=info msg="CreateContainer within sandbox \"195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 16 09:50:54.584532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482667055.mount: Deactivated successfully.
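The containerd entries use a logfmt-style key=value layout with optionally quoted values, so the "shim disconnected" / "cleaning up" triplets can be grouped by their id field. A sketch of a field extractor under that assumption (the parse helper and the sample record are illustrative, not a containerd API; the helper strips surrounding quotes but does not unescape inner \" sequences):

    import re

    FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse(line: str) -> dict:
        # Return the key=value fields of one containerd entry; quoted
        # values keep their inner text, unquoted values pass through.
        return {k: v.strip('"') for k, v in FIELD.findall(line)}

    rec = parse('time="2024-12-16T09:50:54.305967874Z" level=info msg="shim disconnected" '
                'id=91c009d5637112152d980d757c5bc8c48c2b72228c479fc94e040066732d6c8a namespace=k8s.io')
    assert rec["msg"] == "shim disconnected" and rec["id"].startswith("91c009")

The three RemoveContainer entries followed by three Attempt:1 CreateContainer calls show the kubelet replacing all three crashed containers (kube-controller-manager, kube-scheduler, tigera-operator) in one sync pass.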
Dec 16 09:50:54.598682 containerd[1492]: time="2024-12-16T09:50:54.598624446Z" level=info msg="CreateContainer within sandbox \"195dc45b2f8cbbc8bcefad466381f336d2a052faa666f4c8854ec0b0e094fd0b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849\""
Dec 16 09:50:54.599722 containerd[1492]: time="2024-12-16T09:50:54.599332137Z" level=info msg="StartContainer for \"8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849\""
Dec 16 09:50:54.635589 kubelet[2819]: I1216 09:50:54.635415 2819 scope.go:117] "RemoveContainer" containerID="d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593"
Dec 16 09:50:54.642213 containerd[1492]: time="2024-12-16T09:50:54.642162645Z" level=info msg="CreateContainer within sandbox \"ac4b2f92bb8dcd9ffce04c25cb938dc085f79fb7e10fee39321c469c95b87d28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b96362b07ccfe12d64e498b4a17686f9876cd1139e73313c33981723755914ad\""
Dec 16 09:50:54.642483 systemd[1]: Started cri-containerd-8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849.scope - libcontainer container 8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849.
Dec 16 09:50:54.649207 containerd[1492]: time="2024-12-16T09:50:54.648932965Z" level=info msg="CreateContainer within sandbox \"d60e0bcab9deeade4bc561642d975f1fed211d386cccd3b5437698ffc1ed9310\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52c29d3c49d12cd15d290ca4cb0a4ac1f916d33eda448d14b8ede5a367958815\""
Dec 16 09:50:54.654420 containerd[1492]: time="2024-12-16T09:50:54.654159786Z" level=info msg="StartContainer for \"b96362b07ccfe12d64e498b4a17686f9876cd1139e73313c33981723755914ad\""
Dec 16 09:50:54.656159 containerd[1492]: time="2024-12-16T09:50:54.656136407Z" level=info msg="RemoveContainer for \"d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593\""
Dec 16 09:50:54.663742 containerd[1492]: time="2024-12-16T09:50:54.663691150Z" level=info msg="StartContainer for \"52c29d3c49d12cd15d290ca4cb0a4ac1f916d33eda448d14b8ede5a367958815\""
Dec 16 09:50:54.709513 systemd[1]: Started cri-containerd-b96362b07ccfe12d64e498b4a17686f9876cd1139e73313c33981723755914ad.scope - libcontainer container b96362b07ccfe12d64e498b4a17686f9876cd1139e73313c33981723755914ad.
Dec 16 09:50:54.709821 containerd[1492]: time="2024-12-16T09:50:54.709784521Z" level=info msg="RemoveContainer for \"d927b526f06702deeac1af519e36488b0f77f64bf82d2e7e4a4b2599e91bb593\" returns successfully"
Dec 16 09:50:54.730521 systemd[1]: Started cri-containerd-52c29d3c49d12cd15d290ca4cb0a4ac1f916d33eda448d14b8ede5a367958815.scope - libcontainer container 52c29d3c49d12cd15d290ca4cb0a4ac1f916d33eda448d14b8ede5a367958815.
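At this point each replacement container has been created ("CreateContainer ... returns container id") and handed to StartContainer; the entries that follow confirm all three started successfully. A sketch that cross-references the two message types to verify every attempt-1 container actually came up; note the journal text escapes the inner quotes, hence the literal \" in the patterns, which is an assumption about this particular dump rather than a general containerd format:

    import re
    import sys

    CREATED = re.compile(r'Name:([\w-]+),Attempt:(\d+),} returns container id \\"([0-9a-f]{64})\\"')
    STARTED = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')

    text = sys.stdin.read()
    names = {m.group(3): (m.group(1), int(m.group(2))) for m in CREATED.finditer(text)}
    started = {m.group(1) for m in STARTED.finditer(text)}

    for cid, (name, attempt) in names.items():
        state = "started" if cid in started else "never started"
        print(f"{name} attempt {attempt}: {cid[:12]} {state}")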
Dec 16 09:50:54.735305 containerd[1492]: time="2024-12-16T09:50:54.735264294Z" level=info msg="StartContainer for \"8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849\" returns successfully"
Dec 16 09:50:54.787833 containerd[1492]: time="2024-12-16T09:50:54.787661972Z" level=info msg="StartContainer for \"b96362b07ccfe12d64e498b4a17686f9876cd1139e73313c33981723755914ad\" returns successfully"
Dec 16 09:50:54.801843 containerd[1492]: time="2024-12-16T09:50:54.801421627Z" level=info msg="StartContainer for \"52c29d3c49d12cd15d290ca4cb0a4ac1f916d33eda448d14b8ede5a367958815\" returns successfully"
Dec 16 09:50:57.320097 systemd[1]: cri-containerd-8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849.scope: Deactivated successfully.
Dec 16 09:50:57.344252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849-rootfs.mount: Deactivated successfully.
Dec 16 09:50:57.354287 containerd[1492]: time="2024-12-16T09:50:57.354223441Z" level=info msg="shim disconnected" id=8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849 namespace=k8s.io
Dec 16 09:50:57.354287 containerd[1492]: time="2024-12-16T09:50:57.354278313Z" level=warning msg="cleaning up after shim disconnected" id=8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849 namespace=k8s.io
Dec 16 09:50:57.354287 containerd[1492]: time="2024-12-16T09:50:57.354288431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:50:57.503586 kubelet[2819]: I1216 09:50:57.503561 2819 scope.go:117] "RemoveContainer" containerID="8f94feef3494f58333465affd13ca54d6c9f0af94eac47df04cb51eeeb0cf849"
Dec 16 09:50:57.505473 kubelet[2819]: E1216 09:50:57.505436 2819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7bc55997bb-n5gzt_tigera-operator(76fa4222-2ade-4e40-b52b-edd5199a31e9)\"" pod="tigera-operator/tigera-operator-7bc55997bb-n5gzt" podUID="76fa4222-2ade-4e40-b52b-edd5199a31e9"
Dec 16 09:50:58.753310 kubelet[2819]: E1216 09:50:58.727033 2819 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34072->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-e-12e77f9037.18119f731c33312d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-e-12e77f9037,UID:1ff1e5ebeb2f1a891a31198978ed06c5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-12e77f9037,},FirstTimestamp:2024-12-16 09:50:48.257524013 +0000 UTC m=+354.970828486,LastTimestamp:2024-12-16 09:50:48.257524013 +0000 UTC m=+354.970828486,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-12e77f9037,}"
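The restarted tigera-operator container exits again within about three seconds, so the kubelet parks the pod in CrashLoopBackOff: the "back-off 10s" in the pod_workers entry is the first rung of a ladder in which the restart delay doubles per failed restart up to a five-minute cap (and resets after a sufficiently long clean run). A sketch of that schedule; the constants reflect kubelet's commonly documented defaults and should be treated as assumptions rather than an API:

    def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0):
        # Yield the back-off delay applied before each of `restarts` restarts:
        # base seconds, doubling each time, clamped at the cap.
        delay = base
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

The repeated etcd read timeouts ("Failed to update lease", "Server rejected event") in the surrounding entries suggest the crashes and the rejected Unhealthy event share one root cause: the control plane briefly losing its etcd connection on 10.0.0.2:2379.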