Jan 30 06:15:43.897822 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 06:15:43.897857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 06:15:43.897866 kernel: BIOS-provided physical RAM map: Jan 30 06:15:43.897872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 06:15:43.897879 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 06:15:43.897888 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 06:15:43.897899 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jan 30 06:15:43.897910 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jan 30 06:15:43.897925 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 06:15:43.897930 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 06:15:43.897936 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 06:15:43.897941 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 06:15:43.897946 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 06:15:43.897951 kernel: NX (Execute Disable) protection: active Jan 30 06:15:43.897960 kernel: APIC: Static calls initialized Jan 30 06:15:43.897966 kernel: SMBIOS 3.0.0 present. 
Jan 30 06:15:43.897972 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 30 06:15:43.897977 kernel: Hypervisor detected: KVM Jan 30 06:15:43.897983 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 06:15:43.897988 kernel: kvm-clock: using sched offset of 2827729400 cycles Jan 30 06:15:43.897994 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 06:15:43.897999 kernel: tsc: Detected 2445.404 MHz processor Jan 30 06:15:43.898005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 06:15:43.898014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 06:15:43.898019 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jan 30 06:15:43.898025 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 06:15:43.898031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 06:15:43.898036 kernel: Using GB pages for direct mapping Jan 30 06:15:43.898042 kernel: ACPI: Early table checksum verification disabled Jan 30 06:15:43.898047 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Jan 30 06:15:43.898053 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898058 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898066 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898072 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jan 30 06:15:43.898077 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898083 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898089 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898094 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 06:15:43.898100 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Jan 30 06:15:43.898106 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Jan 30 06:15:43.898117 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jan 30 06:15:43.898122 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Jan 30 06:15:43.898128 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Jan 30 06:15:43.898134 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Jan 30 06:15:43.898140 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Jan 30 06:15:43.898146 kernel: No NUMA configuration found Jan 30 06:15:43.898152 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jan 30 06:15:43.898160 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jan 30 06:15:43.898166 kernel: Zone ranges: Jan 30 06:15:43.898171 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 06:15:43.898177 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jan 30 06:15:43.898183 kernel: Normal empty Jan 30 06:15:43.898192 kernel: Movable zone start for each node Jan 30 06:15:43.898203 kernel: Early memory node ranges Jan 30 06:15:43.898214 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 06:15:43.898221 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jan 30 06:15:43.898230 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jan 30 06:15:43.898236 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 06:15:43.898242 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 06:15:43.898248 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 06:15:43.898254 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 06:15:43.898260 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 06:15:43.898266 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 06:15:43.898271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 06:15:43.898277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 06:15:43.898285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 06:15:43.898291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 06:15:43.898297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 06:15:43.898303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 06:15:43.898309 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 06:15:43.898314 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 06:15:43.898320 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 06:15:43.898326 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 06:15:43.898332 kernel: Booting paravirtualized kernel on KVM Jan 30 06:15:43.898340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 06:15:43.898346 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 06:15:43.898352 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 06:15:43.898358 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 06:15:43.898363 kernel: pcpu-alloc: [0] 0 1 Jan 30 06:15:43.898369 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 06:15:43.898376 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 06:15:43.898382 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 06:15:43.898388 kernel: random: crng init done Jan 30 06:15:43.898396 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 06:15:43.898402 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 06:15:43.898408 kernel: Fallback order for Node 0: 0 Jan 30 06:15:43.898414 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jan 30 06:15:43.898419 kernel: Policy zone: DMA32 Jan 30 06:15:43.898425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 06:15:43.898431 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Jan 30 06:15:43.898437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 06:15:43.898443 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 06:15:43.898457 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 06:15:43.898468 kernel: Dynamic Preempt: voluntary Jan 30 06:15:43.898474 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 06:15:43.898484 kernel: rcu: RCU event tracing is enabled. Jan 30 06:15:43.898491 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 06:15:43.898497 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 06:15:43.898505 kernel: Rude variant of Tasks RCU enabled. Jan 30 06:15:43.898516 kernel: Tracing variant of Tasks RCU enabled. Jan 30 06:15:43.898527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 06:15:43.898537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 06:15:43.898543 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 06:15:43.898549 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 06:15:43.898555 kernel: Console: colour VGA+ 80x25 Jan 30 06:15:43.898561 kernel: printk: console [tty0] enabled Jan 30 06:15:43.898567 kernel: printk: console [ttyS0] enabled Jan 30 06:15:43.898578 kernel: ACPI: Core revision 20230628 Jan 30 06:15:43.898590 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 06:15:43.898602 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 06:15:43.898618 kernel: x2apic enabled Jan 30 06:15:43.898630 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 06:15:43.898642 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 06:15:43.898653 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 06:15:43.898665 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404) Jan 30 06:15:43.898676 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 06:15:43.898684 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 06:15:43.898693 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 06:15:43.898722 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 06:15:43.898731 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 06:15:43.898740 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 06:15:43.898752 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 06:15:43.898763 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 06:15:43.898769 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 06:15:43.898775 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 06:15:43.898782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 06:15:43.898788 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jan 30 06:15:43.900837 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 06:15:43.900849 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 06:15:43.900856 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 06:15:43.900863 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 06:15:43.900870 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 06:15:43.900876 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 06:15:43.900882 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 06:15:43.900889 kernel: Freeing SMP alternatives memory: 32K Jan 30 06:15:43.900905 kernel: pid_max: default: 32768 minimum: 301 Jan 30 06:15:43.900917 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 06:15:43.900929 kernel: landlock: Up and running. Jan 30 06:15:43.900940 kernel: SELinux: Initializing. Jan 30 06:15:43.900947 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 06:15:43.900953 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 06:15:43.900959 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 06:15:43.900966 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 06:15:43.900972 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 06:15:43.900982 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 06:15:43.900989 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 06:15:43.900995 kernel: ... version: 0 Jan 30 06:15:43.901001 kernel: ... bit width: 48 Jan 30 06:15:43.901008 kernel: ... generic registers: 6 Jan 30 06:15:43.901014 kernel: ... value mask: 0000ffffffffffff Jan 30 06:15:43.901021 kernel: ... max period: 00007fffffffffff Jan 30 06:15:43.901027 kernel: ... fixed-purpose events: 0 Jan 30 06:15:43.901033 kernel: ... event mask: 000000000000003f Jan 30 06:15:43.901041 kernel: signal: max sigframe size: 1776 Jan 30 06:15:43.901048 kernel: rcu: Hierarchical SRCU implementation. Jan 30 06:15:43.901054 kernel: rcu: Max phase no-delay instances is 400. Jan 30 06:15:43.901061 kernel: smp: Bringing up secondary CPUs ... Jan 30 06:15:43.901741 kernel: smpboot: x86: Booting SMP configuration: Jan 30 06:15:43.901758 kernel: .... 
node #0, CPUs: #1 Jan 30 06:15:43.901768 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 06:15:43.901775 kernel: smpboot: Max logical packages: 1 Jan 30 06:15:43.901782 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Jan 30 06:15:43.901792 kernel: devtmpfs: initialized Jan 30 06:15:43.901812 kernel: x86/mm: Memory block size: 128MB Jan 30 06:15:43.901819 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 06:15:43.901841 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 06:15:43.901854 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 06:15:43.901863 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 06:15:43.901870 kernel: audit: initializing netlink subsys (disabled) Jan 30 06:15:43.901876 kernel: audit: type=2000 audit(1738217743.453:1): state=initialized audit_enabled=0 res=1 Jan 30 06:15:43.901883 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 06:15:43.901893 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 06:15:43.901899 kernel: cpuidle: using governor menu Jan 30 06:15:43.901911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 06:15:43.901923 kernel: dca service started, version 1.12.1 Jan 30 06:15:43.901936 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 06:15:43.901948 kernel: PCI: Using configuration type 1 for base access Jan 30 06:15:43.901960 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 06:15:43.901972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 06:15:43.901979 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 06:15:43.901991 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 06:15:43.902003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 06:15:43.902015 kernel: ACPI: Added _OSI(Module Device) Jan 30 06:15:43.903836 kernel: ACPI: Added _OSI(Processor Device) Jan 30 06:15:43.903847 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 06:15:43.903854 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 06:15:43.903861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 06:15:43.903867 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 06:15:43.903874 kernel: ACPI: Interpreter enabled Jan 30 06:15:43.903884 kernel: ACPI: PM: (supports S0 S5) Jan 30 06:15:43.903891 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 06:15:43.903897 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 06:15:43.903904 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 06:15:43.903910 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 06:15:43.903920 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 06:15:43.904123 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 06:15:43.904245 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 06:15:43.904395 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 06:15:43.904407 kernel: PCI host bridge to bus 0000:00 Jan 30 06:15:43.904547 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 06:15:43.904667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 
06:15:43.904769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 06:15:43.905972 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jan 30 06:15:43.906087 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 06:15:43.906189 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 06:15:43.906283 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 06:15:43.906439 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 06:15:43.906604 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 30 06:15:43.906720 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jan 30 06:15:43.907583 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jan 30 06:15:43.907705 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jan 30 06:15:43.907873 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jan 30 06:15:43.908039 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 06:15:43.908161 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.908267 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jan 30 06:15:43.908377 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.908480 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jan 30 06:15:43.908596 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.908703 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jan 30 06:15:43.909933 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.910077 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jan 30 06:15:43.910198 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.910313 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jan 30 06:15:43.910436 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.910581 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jan 30 06:15:43.910754 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.910960 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jan 30 06:15:43.911105 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.911234 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jan 30 06:15:43.911360 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 30 06:15:43.911467 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jan 30 06:15:43.911582 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 06:15:43.911685 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 06:15:43.913906 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 06:15:43.914070 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jan 30 06:15:43.914189 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jan 30 06:15:43.914304 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 06:15:43.914434 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 06:15:43.914607 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 06:15:43.914757 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jan 30 06:15:43.916986 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 30 
06:15:43.917132 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Jan 30 06:15:43.917284 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 06:15:43.917476 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 06:15:43.917589 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 06:15:43.917728 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 30 06:15:43.919984 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jan 30 06:15:43.920185 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 06:15:43.920346 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 06:15:43.920459 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 06:15:43.920579 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 30 06:15:43.920689 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jan 30 06:15:43.920901 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jan 30 06:15:43.921015 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 06:15:43.921177 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 06:15:43.921366 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 06:15:43.921504 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 30 06:15:43.921665 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 30 06:15:43.922904 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 06:15:43.923029 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 06:15:43.923165 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 06:15:43.923292 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 30 06:15:43.923459 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jan 30 06:15:43.923597 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jan 30 06:15:43.923708 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 06:15:43.927071 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 06:15:43.927223 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 06:15:43.927359 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 30 06:15:43.927475 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jan 30 06:15:43.927644 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jan 30 06:15:43.927770 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 06:15:43.927951 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 06:15:43.928062 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 06:15:43.928075 kernel: acpiphp: Slot [0] registered Jan 30 06:15:43.928221 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 06:15:43.928337 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jan 30 06:15:43.928446 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jan 30 06:15:43.928562 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jan 30 06:15:43.928665 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 06:15:43.928766 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 06:15:43.928941 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 06:15:43.928954 
kernel: acpiphp: Slot [0-2] registered Jan 30 06:15:43.929065 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 06:15:43.929194 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 06:15:43.929301 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 06:15:43.929311 kernel: acpiphp: Slot [0-3] registered Jan 30 06:15:43.929421 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 06:15:43.929552 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 06:15:43.929692 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 06:15:43.929704 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 06:15:43.929712 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 06:15:43.929718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 06:15:43.929724 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 06:15:43.929731 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 06:15:43.929742 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 06:15:43.929748 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 06:15:43.929755 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 06:15:43.929761 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 06:15:43.929772 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 06:15:43.929784 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 06:15:43.930864 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 06:15:43.930879 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 06:15:43.930885 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 06:15:43.930897 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 06:15:43.930904 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 06:15:43.930910 kernel: iommu: Default domain type: Translated Jan 30 06:15:43.930917 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 06:15:43.930923 kernel: PCI: Using ACPI for IRQ routing Jan 30 06:15:43.930930 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 06:15:43.930937 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 06:15:43.930943 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jan 30 06:15:43.931097 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 06:15:43.931234 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 06:15:43.931342 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 06:15:43.931351 kernel: vgaarb: loaded Jan 30 06:15:43.931358 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 06:15:43.931365 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 06:15:43.931371 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 06:15:43.931377 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 06:15:43.931386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 06:15:43.931397 kernel: pnp: PnP ACPI init Jan 30 06:15:43.931532 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 06:15:43.931543 kernel: pnp: PnP ACPI: found 5 devices Jan 30 06:15:43.931551 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 06:15:43.931558 kernel: NET: Registered PF_INET protocol family Jan 30 
06:15:43.931564 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 06:15:43.931571 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 06:15:43.931578 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 06:15:43.931584 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 06:15:43.931599 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 06:15:43.931611 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 06:15:43.931623 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 06:15:43.931630 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 06:15:43.931637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 06:15:43.931643 kernel: NET: Registered PF_XDP protocol family Jan 30 06:15:43.931755 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 06:15:43.932962 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 06:15:43.933113 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 06:15:43.933247 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 30 06:15:43.933377 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 30 06:15:43.933516 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 30 06:15:43.933659 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 06:15:43.935127 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 06:15:43.935288 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 06:15:43.935424 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 06:15:43.935565 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 06:15:43.935711 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 06:15:43.936928 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 06:15:43.937079 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 06:15:43.937222 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 06:15:43.937356 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 06:15:43.937497 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 06:15:43.937653 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 06:15:43.938833 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 06:15:43.938967 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 06:15:43.939075 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 06:15:43.939180 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 06:15:43.939307 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 06:15:43.939413 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 06:15:43.939517 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 06:15:43.939641 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 30 06:15:43.941871 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 06:15:43.941996 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 06:15:43.942120 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 06:15:43.942232 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 30 06:15:43.942337 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 06:15:43.942438 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 06:15:43.942589 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 06:15:43.942698 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 30 06:15:43.944513 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 06:15:43.946863 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 06:15:43.946979 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 06:15:43.947124 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 06:15:43.948420 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 06:15:43.948531 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jan 30 06:15:43.948627 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 06:15:43.948721 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 06:15:43.948879 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 30 06:15:43.949009 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 06:15:43.949126 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 30 06:15:43.949245 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 06:15:43.949364 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 30 06:15:43.949465 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 06:15:43.949574 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 30 06:15:43.949673 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 06:15:43.949784 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 30 06:15:43.949940 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 06:15:43.950051 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jan 30 06:15:43.950153 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 06:15:43.950260 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 30 06:15:43.950361 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 30 06:15:43.950460 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 06:15:43.950572 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 30 06:15:43.950672 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jan 30 06:15:43.950770 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 06:15:43.950925 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 30 06:15:43.951075 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 30 06:15:43.951191 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 06:15:43.951206 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 06:15:43.951214 kernel: PCI: CLS 0 bytes, default 64 Jan 30 06:15:43.951220 kernel: Initialise system trusted keyrings Jan 30 06:15:43.951227 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 06:15:43.951234 kernel: Key type asymmetric registered Jan 30 06:15:43.951240 kernel: Asymmetric key parser 'x509' registered Jan 30 06:15:43.951247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Jan 30 06:15:43.951254 kernel: io scheduler mq-deadline registered Jan 30 06:15:43.951261 kernel: io scheduler kyber registered Jan 30 06:15:43.951267 kernel: io scheduler bfq registered Jan 30 06:15:43.951377 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 06:15:43.951483 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 06:15:43.951588 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 06:15:43.951691 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 06:15:43.951855 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 06:15:43.951971 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 06:15:43.952103 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 06:15:43.952231 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 06:15:43.952341 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 06:15:43.952444 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 06:15:43.952549 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 06:15:43.952650 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 06:15:43.952752 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 06:15:43.952941 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 06:15:43.953082 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 06:15:43.953214 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 06:15:43.953235 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 06:15:43.953371 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 30 06:15:43.953502 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 30 06:15:43.953515 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 06:15:43.953525 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 30 06:15:43.953538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 06:15:43.953548 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 06:15:43.953558 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 06:15:43.953568 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 06:15:43.953583 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 06:15:43.953726 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 06:15:43.953745 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 06:15:43.953964 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 06:15:43.954091 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T06:15:43 UTC (1738217743) Jan 30 06:15:43.954216 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 06:15:43.954229 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 06:15:43.954239 kernel: NET: Registered PF_INET6 protocol family Jan 30 06:15:43.954255 kernel: Segment Routing with IPv6 Jan 30 06:15:43.954266 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 06:15:43.954276 kernel: NET: Registered PF_PACKET protocol family Jan 30 06:15:43.954285 kernel: Key type dns_resolver registered Jan 30 06:15:43.954295 kernel: IPI shorthand broadcast: enabled Jan 30 06:15:43.954305 kernel: sched_clock: Marking stable (1121007591, 132863199)->(1262593857, -8723067) Jan 30 06:15:43.954314 kernel: registered taskstats version 1 Jan 30 06:15:43.954324 kernel: Loading compiled-in X.509 certificates Jan 30 06:15:43.954334 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 06:15:43.954346 kernel: Key type .fscrypt registered Jan 30 06:15:43.954356 kernel: Key type fscrypt-provisioning registered Jan 30 06:15:43.954366 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 06:15:43.954375 kernel: ima: Allocated hash algorithm: sha1 Jan 30 06:15:43.954388 kernel: ima: No architecture policies found Jan 30 06:15:43.954398 kernel: clk: Disabling unused clocks Jan 30 06:15:43.954408 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 06:15:43.954418 kernel: Write protecting the kernel read-only data: 36864k Jan 30 06:15:43.954431 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 06:15:43.954440 kernel: Run /init as init process Jan 30 06:15:43.954451 kernel: with arguments: Jan 30 06:15:43.954461 kernel: /init Jan 30 06:15:43.954470 kernel: with environment: Jan 30 06:15:43.954480 kernel: HOME=/ Jan 30 06:15:43.954490 kernel: TERM=linux Jan 30 06:15:43.954499 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 06:15:43.954511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 06:15:43.954527 systemd[1]: Detected virtualization kvm. Jan 30 06:15:43.954537 systemd[1]: Detected architecture x86-64. Jan 30 06:15:43.954547 systemd[1]: Running in initrd. Jan 30 06:15:43.954556 systemd[1]: No hostname configured, using default hostname. Jan 30 06:15:43.954567 systemd[1]: Hostname set to . Jan 30 06:15:43.954577 systemd[1]: Initializing machine ID from VM UUID. Jan 30 06:15:43.954587 systemd[1]: Queued start job for default target initrd.target. Jan 30 06:15:43.954597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 06:15:43.954610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 06:15:43.954621 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 06:15:43.954632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 06:15:43.954642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 06:15:43.954652 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 06:15:43.954664 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 06:15:43.954678 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 06:15:43.954691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 06:15:43.954703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 06:15:43.954713 systemd[1]: Reached target paths.target - Path Units. Jan 30 06:15:43.954723 systemd[1]: Reached target slices.target - Slice Units. Jan 30 06:15:43.954736 systemd[1]: Reached target swap.target - Swaps. Jan 30 06:15:43.954747 systemd[1]: Reached target timers.target - Timer Units. Jan 30 06:15:43.954757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 06:15:43.954768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 06:15:43.954781 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 06:15:43.954791 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 06:15:43.954854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 06:15:43.954865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 06:15:43.954876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 06:15:43.954886 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 06:15:43.954896 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 06:15:43.954907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 06:15:43.954921 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 06:15:43.954932 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 06:15:43.954943 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 06:15:43.954955 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 06:15:43.955000 systemd-journald[187]: Collecting audit messages is disabled. Jan 30 06:15:43.955029 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:43.955040 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 06:15:43.955051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 06:15:43.955061 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 06:15:43.955071 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 06:15:43.955085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 06:15:43.955095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 06:15:43.955106 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 06:15:43.955116 kernel: Bridge firewalling registered Jan 30 06:15:43.955125 systemd-journald[187]: Journal started Jan 30 06:15:43.955151 systemd-journald[187]: Runtime Journal (/run/log/journal/47b0f14091a14512907f6d8983ad76fa) is 4.8M, max 38.4M, 33.6M free. Jan 30 06:15:43.920254 systemd-modules-load[188]: Inserted module 'overlay' Jan 30 06:15:43.947598 systemd-modules-load[188]: Inserted module 'br_netfilter' Jan 30 06:15:43.984865 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 06:15:43.987208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 06:15:43.987967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:43.997429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 06:15:43.999973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 06:15:44.002126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 06:15:44.004879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 06:15:44.014568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 06:15:44.017129 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 06:15:44.026843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 06:15:44.029020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 06:15:44.035176 dracut-cmdline[217]: dracut-dracut-053 Jan 30 06:15:44.039452 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 06:15:44.039004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 06:15:44.072449 systemd-resolved[225]: Positive Trust Anchors: Jan 30 06:15:44.073109 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 06:15:44.073136 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 06:15:44.078092 systemd-resolved[225]: Defaulting to hostname 'linux'. Jan 30 06:15:44.079106 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 06:15:44.079849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 06:15:44.116851 kernel: SCSI subsystem initialized Jan 30 06:15:44.125831 kernel: Loading iSCSI transport class v2.0-870. Jan 30 06:15:44.134827 kernel: iscsi: registered transport (tcp) Jan 30 06:15:44.153847 kernel: iscsi: registered transport (qla4xxx) Jan 30 06:15:44.153920 kernel: QLogic iSCSI HBA Driver Jan 30 06:15:44.196848 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 06:15:44.201930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 06:15:44.226827 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 06:15:44.226880 kernel: device-mapper: uevent: version 1.0.3 Jan 30 06:15:44.226893 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 06:15:44.268836 kernel: raid6: avx2x4 gen() 32389 MB/s Jan 30 06:15:44.285827 kernel: raid6: avx2x2 gen() 29529 MB/s Jan 30 06:15:44.302946 kernel: raid6: avx2x1 gen() 24982 MB/s Jan 30 06:15:44.303000 kernel: raid6: using algorithm avx2x4 gen() 32389 MB/s Jan 30 06:15:44.321023 kernel: raid6: .... xor() 4501 MB/s, rmw enabled Jan 30 06:15:44.321069 kernel: raid6: using avx2x2 recovery algorithm Jan 30 06:15:44.339846 kernel: xor: automatically using best checksumming function avx Jan 30 06:15:44.476855 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 06:15:44.490026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 06:15:44.495959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 06:15:44.524051 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 30 06:15:44.528213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 06:15:44.534948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 06:15:44.549321 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 30 06:15:44.580375 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 06:15:44.586957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 06:15:44.651013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 06:15:44.663205 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 06:15:44.674161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 06:15:44.675472 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 06:15:44.678721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 06:15:44.679496 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 06:15:44.688743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 06:15:44.698237 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 06:15:44.760627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 06:15:44.767635 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 06:15:44.760748 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 06:15:44.768479 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 06:15:44.769018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 06:15:44.769195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:44.769730 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:44.779082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:44.783379 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 06:15:44.783406 kernel: AES CTR mode by8 optimization enabled Jan 30 06:15:44.817836 kernel: scsi host0: Virtio SCSI HBA Jan 30 06:15:44.824867 kernel: libata version 3.00 loaded. 
Jan 30 06:15:44.824917 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 06:15:44.831599 kernel: ACPI: bus type USB registered Jan 30 06:15:44.831638 kernel: usbcore: registered new interface driver usbfs Jan 30 06:15:44.831649 kernel: usbcore: registered new interface driver hub Jan 30 06:15:44.831659 kernel: usbcore: registered new device driver usb Jan 30 06:15:44.871112 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 06:15:44.906496 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 06:15:44.906514 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 06:15:44.906665 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 06:15:44.906793 kernel: scsi host1: ahci Jan 30 06:15:44.906970 kernel: scsi host2: ahci Jan 30 06:15:44.907097 kernel: scsi host3: ahci Jan 30 06:15:44.907223 kernel: scsi host4: ahci Jan 30 06:15:44.907352 kernel: scsi host5: ahci Jan 30 06:15:44.907483 kernel: scsi host6: ahci Jan 30 06:15:44.907605 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jan 30 06:15:44.907620 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jan 30 06:15:44.907629 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jan 30 06:15:44.907637 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jan 30 06:15:44.907646 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 06:15:44.928194 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jan 30 06:15:44.928210 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 06:15:44.928421 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jan 30 06:15:44.928442 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 06:15:44.928636 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 06:15:44.928866 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 06:15:44.929327 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 06:15:44.930140 kernel: hub 1-0:1.0: USB hub found Jan 30 06:15:44.930315 kernel: hub 1-0:1.0: 4 ports detected Jan 30 06:15:44.930468 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 30 06:15:44.937297 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 06:15:44.937461 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 06:15:44.937596 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 30 06:15:44.937729 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 06:15:44.937908 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 06:15:44.938070 kernel: hub 2-0:1.0: USB hub found Jan 30 06:15:44.938283 kernel: hub 2-0:1.0: 4 ports detected Jan 30 06:15:44.938481 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 06:15:44.938500 kernel: GPT:17805311 != 80003071 Jan 30 06:15:44.938516 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 06:15:44.938527 kernel: GPT:17805311 != 80003071 Jan 30 06:15:44.938535 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 06:15:44.938557 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 06:15:44.938577 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 06:15:44.883011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 06:15:44.891043 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 06:15:44.923257 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 06:15:45.167844 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 06:15:45.220754 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 06:15:45.220859 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 06:15:45.220873 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 06:15:45.223647 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 06:15:45.229834 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 06:15:45.229870 kernel: ata1.00: applying bridge limits Jan 30 06:15:45.230843 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 06:15:45.233844 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 06:15:45.233888 kernel: ata1.00: configured for UDMA/100 Jan 30 06:15:45.235264 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 06:15:45.289232 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 06:15:45.301323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 06:15:45.301408 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (454) Jan 30 06:15:45.301483 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 30 06:15:45.310849 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (453) Jan 30 06:15:45.312907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 06:15:45.323720 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 06:15:45.326996 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 06:15:45.330167 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 06:15:45.339886 kernel: usbcore: registered new interface driver usbhid Jan 30 06:15:45.339913 kernel: usbhid: USB HID core driver Jan 30 06:15:45.339923 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 30 06:15:45.339932 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 06:15:45.345261 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 06:15:45.346618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 06:15:45.355044 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 06:15:45.361718 disk-uuid[576]: Primary Header is updated. Jan 30 06:15:45.361718 disk-uuid[576]: Secondary Entries is updated. Jan 30 06:15:45.361718 disk-uuid[576]: Secondary Header is updated. Jan 30 06:15:45.367869 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 06:15:45.373824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 06:15:46.375978 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 06:15:46.377884 disk-uuid[578]: The operation has completed successfully. Jan 30 06:15:46.434517 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 06:15:46.434653 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 30 06:15:46.446920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 06:15:46.451728 sh[595]: Success Jan 30 06:15:46.465297 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 06:15:46.519358 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 06:15:46.533209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 06:15:46.533970 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 06:15:46.551359 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 06:15:46.551413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 06:15:46.553359 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 06:15:46.556260 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 06:15:46.556278 kernel: BTRFS info (device dm-0): using free space tree Jan 30 06:15:46.565832 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 06:15:46.568333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 06:15:46.569395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 06:15:46.574998 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 06:15:46.576997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 06:15:46.595715 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 06:15:46.595748 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 06:15:46.595758 kernel: BTRFS info (device sda6): using free space tree Jan 30 06:15:46.599956 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 06:15:46.599989 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 06:15:46.613175 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 06:15:46.613855 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 06:15:46.620661 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 06:15:46.628229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 06:15:46.692347 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 06:15:46.698996 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 06:15:46.713071 ignition[705]: Ignition 2.19.0 Jan 30 06:15:46.713082 ignition[705]: Stage: fetch-offline Jan 30 06:15:46.713117 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:46.713127 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:46.715405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 06:15:46.713205 ignition[705]: parsed url from cmdline: "" Jan 30 06:15:46.713209 ignition[705]: no config URL provided Jan 30 06:15:46.713214 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 06:15:46.713222 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jan 30 06:15:46.713227 ignition[705]: failed to fetch config: resource requires networking Jan 30 06:15:46.713392 ignition[705]: Ignition finished successfully Jan 30 06:15:46.721569 systemd-networkd[777]: lo: Link UP Jan 30 06:15:46.721575 systemd-networkd[777]: lo: Gained carrier Jan 30 06:15:46.724236 systemd-networkd[777]: Enumeration completed Jan 30 06:15:46.724310 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 06:15:46.725308 systemd[1]: Reached target network.target - Network. Jan 30 06:15:46.726247 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:46.726251 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 06:15:46.727044 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:46.727049 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 06:15:46.728038 systemd-networkd[777]: eth0: Link UP Jan 30 06:15:46.728042 systemd-networkd[777]: eth0: Gained carrier Jan 30 06:15:46.728049 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:46.733202 systemd-networkd[777]: eth1: Link UP Jan 30 06:15:46.733206 systemd-networkd[777]: eth1: Gained carrier Jan 30 06:15:46.733213 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:46.734012 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 06:15:46.748103 ignition[784]: Ignition 2.19.0 Jan 30 06:15:46.748115 ignition[784]: Stage: fetch Jan 30 06:15:46.748271 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:46.748289 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:46.748371 ignition[784]: parsed url from cmdline: "" Jan 30 06:15:46.748375 ignition[784]: no config URL provided Jan 30 06:15:46.748380 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 06:15:46.748388 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 30 06:15:46.748403 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 06:15:46.748541 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 06:15:46.786873 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 06:15:46.792921 systemd-networkd[777]: eth0: DHCPv4 address 78.47.103.36/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 06:15:46.949046 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 06:15:46.955747 ignition[784]: GET result: OK Jan 30 06:15:46.955849 ignition[784]: parsing config with SHA512: e585b45040d31cbd3657c4668c1b8f20c90c91ee6f2580b9eff84f679de0e198eac0d5c8bef91d8b8d0097559396e0478b3d43d957eded88d89918a76bebeecd Jan 30 06:15:46.959082 unknown[784]: fetched base config from "system" Jan 30 06:15:46.959096 unknown[784]: fetched base config from "system" Jan 30 06:15:46.959325 ignition[784]: fetch: fetch complete Jan 30 06:15:46.959102 unknown[784]: fetched user config from "hetzner" Jan 30 06:15:46.959330 ignition[784]: fetch: fetch passed Jan 30 06:15:46.962077 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 06:15:46.959371 ignition[784]: Ignition finished successfully Jan 30 06:15:46.969146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 06:15:46.985186 ignition[792]: Ignition 2.19.0 Jan 30 06:15:46.985204 ignition[792]: Stage: kargs Jan 30 06:15:46.985400 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:46.985414 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:46.989248 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 06:15:46.986556 ignition[792]: kargs: kargs passed Jan 30 06:15:46.986607 ignition[792]: Ignition finished successfully Jan 30 06:15:46.997118 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 06:15:47.010638 ignition[799]: Ignition 2.19.0 Jan 30 06:15:47.010650 ignition[799]: Stage: disks Jan 30 06:15:47.010789 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:47.010833 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:47.011488 ignition[799]: disks: disks passed Jan 30 06:15:47.013074 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 06:15:47.011524 ignition[799]: Ignition finished successfully Jan 30 06:15:47.014407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 06:15:47.015373 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 06:15:47.016591 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 06:15:47.017667 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 30 06:15:47.018611 systemd[1]: Reached target basic.target - Basic System. Jan 30 06:15:47.031012 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 06:15:47.047372 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 06:15:47.050646 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 06:15:47.055926 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 06:15:47.147000 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 06:15:47.147461 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 06:15:47.148540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 06:15:47.153885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 06:15:47.156905 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 06:15:47.159974 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 06:15:47.162375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 06:15:47.163487 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 06:15:47.169894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 06:15:47.172310 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Jan 30 06:15:47.176365 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 06:15:47.176393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 06:15:47.178441 kernel: BTRFS info (device sda6): using free space tree Jan 30 06:15:47.180190 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 06:15:47.187123 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 06:15:47.187148 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 06:15:47.190868 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 06:15:47.226167 coreos-metadata[817]: Jan 30 06:15:47.226 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 06:15:47.228373 coreos-metadata[817]: Jan 30 06:15:47.228 INFO Fetch successful Jan 30 06:15:47.228373 coreos-metadata[817]: Jan 30 06:15:47.228 INFO wrote hostname ci-4081-3-0-a-a10ab07ed7 to /sysroot/etc/hostname Jan 30 06:15:47.229613 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 06:15:47.232343 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 06:15:47.236250 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 30 06:15:47.240893 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 06:15:47.245698 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 06:15:47.336666 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 06:15:47.343015 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 06:15:47.346996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 30 06:15:47.352844 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 06:15:47.378621 ignition[932]: INFO : Ignition 2.19.0 Jan 30 06:15:47.378621 ignition[932]: INFO : Stage: mount Jan 30 06:15:47.378621 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:47.378621 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:47.378621 ignition[932]: INFO : mount: mount passed Jan 30 06:15:47.378621 ignition[932]: INFO : Ignition finished successfully Jan 30 06:15:47.382315 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 06:15:47.388948 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 06:15:47.389679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 06:15:47.549502 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 06:15:47.558011 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 06:15:47.571840 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) Jan 30 06:15:47.574980 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 06:15:47.575023 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 06:15:47.577578 kernel: BTRFS info (device sda6): using free space tree Jan 30 06:15:47.583815 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 06:15:47.583850 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 06:15:47.586710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 06:15:47.607895 ignition[960]: INFO : Ignition 2.19.0 Jan 30 06:15:47.608748 ignition[960]: INFO : Stage: files Jan 30 06:15:47.609844 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:47.609844 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:47.611694 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 30 06:15:47.612550 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 06:15:47.612550 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 06:15:47.616202 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 06:15:47.617473 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 06:15:47.617473 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 06:15:47.616742 unknown[960]: wrote ssh authorized keys file for user: core Jan 30 06:15:47.620148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 06:15:47.620148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 06:15:47.708007 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 06:15:47.982562 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 06:15:47.982562 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 06:15:47.986867 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 06:15:47.986867 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 06:15:48.540257 systemd-networkd[777]: eth0: Gained IPv6LL Jan 30 06:15:48.540951 systemd-networkd[777]: eth1: Gained IPv6LL Jan 30 06:15:48.655883 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 06:15:48.919273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 06:15:48.919273 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(d): op(e): [finished] writing 
systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 06:15:48.921165 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 06:15:48.921165 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 06:15:48.921165 ignition[960]: INFO : files: files passed Jan 30 06:15:48.921165 ignition[960]: INFO : Ignition finished successfully Jan 30 06:15:48.923248 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 06:15:48.932950 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 06:15:48.937189 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 06:15:48.937976 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 06:15:48.938067 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 06:15:48.949728 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 06:15:48.951055 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 06:15:48.951771 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 06:15:48.952622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 06:15:48.953368 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 06:15:48.957911 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 06:15:48.981084 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 06:15:48.981212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 06:15:48.982398 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 06:15:48.983283 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 06:15:48.984318 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 06:15:48.988919 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 06:15:48.999028 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 06:15:49.004941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 06:15:49.012967 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 06:15:49.013962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 06:15:49.014871 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 06:15:49.015915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 06:15:49.016074 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 30 06:15:49.017310 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 06:15:49.018033 systemd[1]: Stopped target basic.target - Basic System. Jan 30 06:15:49.019055 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 06:15:49.019939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 06:15:49.020849 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 06:15:49.021898 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 06:15:49.022927 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 06:15:49.024013 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 06:15:49.025008 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 06:15:49.026037 systemd[1]: Stopped target swap.target - Swaps. Jan 30 06:15:49.026963 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 06:15:49.027068 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 06:15:49.028159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 06:15:49.028825 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 06:15:49.029761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 06:15:49.030085 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 06:15:49.031161 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 06:15:49.031461 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 06:15:49.032627 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 06:15:49.032791 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 06:15:49.033937 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 06:15:49.034084 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 06:15:49.034922 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 06:15:49.035032 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 06:15:49.042286 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 06:15:49.045017 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 06:15:49.045486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 06:15:49.045627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 06:15:49.047198 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 06:15:49.047294 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 06:15:49.059411 ignition[1012]: INFO : Ignition 2.19.0 Jan 30 06:15:49.059411 ignition[1012]: INFO : Stage: umount Jan 30 06:15:49.059411 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 06:15:49.059411 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 06:15:49.059411 ignition[1012]: INFO : umount: umount passed Jan 30 06:15:49.059411 ignition[1012]: INFO : Ignition finished successfully Jan 30 06:15:49.060833 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 06:15:49.060942 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 30 06:15:49.063286 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 06:15:49.063380 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 06:15:49.069606 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 06:15:49.069689 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 06:15:49.070208 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 06:15:49.070254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 06:15:49.070693 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 06:15:49.070734 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 06:15:49.072981 systemd[1]: Stopped target network.target - Network. Jan 30 06:15:49.073644 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 06:15:49.073696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 06:15:49.074296 systemd[1]: Stopped target paths.target - Path Units. Jan 30 06:15:49.074762 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 06:15:49.083857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 06:15:49.085321 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 06:15:49.086180 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 06:15:49.088409 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 06:15:49.088476 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 06:15:49.095357 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 06:15:49.095425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 06:15:49.096156 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 06:15:49.096225 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 06:15:49.097231 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 06:15:49.097295 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 06:15:49.098231 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 06:15:49.099267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 06:15:49.102104 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 06:15:49.104528 systemd-networkd[777]: eth0: DHCPv6 lease lost Jan 30 06:15:49.109204 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 06:15:49.109306 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 06:15:49.109869 systemd-networkd[777]: eth1: DHCPv6 lease lost Jan 30 06:15:49.110428 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 06:15:49.110514 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 06:15:49.112634 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 06:15:49.112749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 06:15:49.114718 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 06:15:49.114985 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 06:15:49.116423 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 06:15:49.116489 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 06:15:49.123873 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 30 06:15:49.125072 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 06:15:49.125123 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 06:15:49.125623 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 06:15:49.125666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 06:15:49.126536 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 06:15:49.126579 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 06:15:49.127530 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 06:15:49.127575 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 06:15:49.128776 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 06:15:49.139229 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 06:15:49.139339 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 06:15:49.146407 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 06:15:49.146573 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 06:15:49.147669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 06:15:49.147713 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 06:15:49.148518 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 06:15:49.148559 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 06:15:49.149511 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 06:15:49.149557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 06:15:49.150967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 06:15:49.151011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 06:15:49.152001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 06:15:49.152045 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 06:15:49.162936 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 06:15:49.164143 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 06:15:49.164773 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 06:15:49.165968 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 06:15:49.166014 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 06:15:49.166982 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 06:15:49.167027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 06:15:49.167730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 06:15:49.167772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:49.170179 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 06:15:49.170272 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 06:15:49.171402 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 06:15:49.177964 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 30 06:15:49.184375 systemd[1]: Switching root. Jan 30 06:15:49.213834 systemd-journald[187]: Journal stopped Jan 30 06:15:50.155589 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 30 06:15:50.155657 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 06:15:50.155670 kernel: SELinux: policy capability open_perms=1 Jan 30 06:15:50.155680 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 06:15:50.155692 kernel: SELinux: policy capability always_check_network=0 Jan 30 06:15:50.155702 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 06:15:50.155712 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 06:15:50.155721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 06:15:50.155730 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 06:15:50.155739 kernel: audit: type=1403 audit(1738217749.345:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 06:15:50.155749 systemd[1]: Successfully loaded SELinux policy in 44.106ms. Jan 30 06:15:50.155793 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.434ms. Jan 30 06:15:50.157951 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 06:15:50.157970 systemd[1]: Detected virtualization kvm. Jan 30 06:15:50.157981 systemd[1]: Detected architecture x86-64. Jan 30 06:15:50.157991 systemd[1]: Detected first boot. Jan 30 06:15:50.158002 systemd[1]: Hostname set to <ci-4081-3-0-a-a10ab07ed7>. Jan 30 06:15:50.158018 systemd[1]: Initializing machine ID from VM UUID. Jan 30 06:15:50.158029 zram_generator::config[1054]: No configuration found. Jan 30 06:15:50.158050 systemd[1]: Populated /etc with preset unit settings. Jan 30 06:15:50.158061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 06:15:50.158074 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 06:15:50.158084 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 06:15:50.158095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 06:15:50.158105 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 06:15:50.158116 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 06:15:50.158125 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 06:15:50.158141 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 06:15:50.158151 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 06:15:50.158164 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 06:15:50.158178 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 06:15:50.158190 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 06:15:50.158200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 06:15:50.158210 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 06:15:50.158220 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 06:15:50.158230 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 06:15:50.158240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 06:15:50.158250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 06:15:50.158262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 06:15:50.158272 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 06:15:50.158283 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 06:15:50.158293 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 06:15:50.158303 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 06:15:50.158314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 06:15:50.158326 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 06:15:50.158336 systemd[1]: Reached target slices.target - Slice Units. Jan 30 06:15:50.158346 systemd[1]: Reached target swap.target - Swaps. Jan 30 06:15:50.158356 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 06:15:50.158366 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 06:15:50.158376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 06:15:50.158387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 06:15:50.158397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 06:15:50.158408 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 06:15:50.158418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 06:15:50.158433 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 06:15:50.158446 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 06:15:50.158456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:50.158466 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 06:15:50.158477 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 06:15:50.158489 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 06:15:50.158500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 06:15:50.158510 systemd[1]: Reached target machines.target - Containers. Jan 30 06:15:50.158521 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 06:15:50.158532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 06:15:50.158542 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 06:15:50.158552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 06:15:50.158562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 06:15:50.158574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 06:15:50.158587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 06:15:50.158597 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 06:15:50.158607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 06:15:50.158618 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 06:15:50.158628 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 06:15:50.158639 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 06:15:50.158649 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 06:15:50.158659 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 06:15:50.158671 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 06:15:50.158681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 06:15:50.158691 kernel: fuse: init (API version 7.39) Jan 30 06:15:50.158701 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 06:15:50.158712 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 06:15:50.158721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 06:15:50.158732 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 06:15:50.158742 systemd[1]: Stopped verity-setup.service. Jan 30 06:15:50.158753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:50.158765 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 06:15:50.158784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 06:15:50.162250 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 06:15:50.162272 kernel: ACPI: bus type drm_connector registered Jan 30 06:15:50.162283 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 06:15:50.162299 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 06:15:50.162309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 06:15:50.162319 kernel: loop: module loaded Jan 30 06:15:50.162347 systemd-journald[1137]: Collecting audit messages is disabled. Jan 30 06:15:50.162368 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 06:15:50.162379 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 06:15:50.162389 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 06:15:50.162402 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 06:15:50.162413 systemd-journald[1137]: Journal started Jan 30 06:15:50.162435 systemd-journald[1137]: Runtime Journal (/run/log/journal/47b0f14091a14512907f6d8983ad76fa) is 4.8M, max 38.4M, 33.6M free. Jan 30 06:15:49.872673 systemd[1]: Queued start job for default target multi-user.target. Jan 30 06:15:49.896937 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 06:15:50.164520 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 06:15:49.897443 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 06:15:50.166935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 06:15:50.167091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 06:15:50.167907 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 06:15:50.168134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 06:15:50.169967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 06:15:50.170143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 06:15:50.170988 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 06:15:50.171135 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 06:15:50.172082 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 06:15:50.172234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 06:15:50.173153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 06:15:50.174057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 06:15:50.174876 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 06:15:50.191405 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 06:15:50.198043 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 06:15:50.201848 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 06:15:50.203929 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 06:15:50.203968 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 06:15:50.205715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 06:15:50.211969 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 06:15:50.217640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 06:15:50.219680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 06:15:50.223914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 06:15:50.232330 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 06:15:50.233011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 06:15:50.236927 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 06:15:50.237569 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 06:15:50.246177 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 06:15:50.248973 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 06:15:50.251974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 06:15:50.255478 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 06:15:50.257053 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 06:15:50.258554 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 06:15:50.269858 systemd-journald[1137]: Time spent on flushing to /var/log/journal/47b0f14091a14512907f6d8983ad76fa is 54.638ms for 1137 entries. 
Jan 30 06:15:50.269858 systemd-journald[1137]: System Journal (/var/log/journal/47b0f14091a14512907f6d8983ad76fa) is 8.0M, max 584.8M, 576.8M free. Jan 30 06:15:50.358151 systemd-journald[1137]: Received client request to flush runtime journal. Jan 30 06:15:50.358524 kernel: loop0: detected capacity change from 0 to 8 Jan 30 06:15:50.358555 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 06:15:50.358573 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 06:15:50.286887 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 06:15:50.287512 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 06:15:50.295004 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 06:15:50.322176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 06:15:50.357949 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 30 06:15:50.357968 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 30 06:15:50.359569 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 06:15:50.360559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 06:15:50.364364 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 06:15:50.373043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 06:15:50.384985 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 06:15:50.385708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 06:15:50.394902 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 06:15:50.396836 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 06:15:50.418252 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 06:15:50.435293 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 06:15:50.441987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 06:15:50.449819 kernel: loop3: detected capacity change from 0 to 218376 Jan 30 06:15:50.460852 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jan 30 06:15:50.461213 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jan 30 06:15:50.468292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 06:15:50.503860 kernel: loop4: detected capacity change from 0 to 8 Jan 30 06:15:50.509840 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 06:15:50.528002 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 06:15:50.553834 kernel: loop7: detected capacity change from 0 to 218376 Jan 30 06:15:50.568100 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 06:15:50.570510 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 30 06:15:50.580040 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 06:15:50.580140 systemd[1]: Reloading... Jan 30 06:15:50.682900 zram_generator::config[1229]: No configuration found. 
Jan 30 06:15:50.822072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 06:15:50.842617 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 06:15:50.863689 systemd[1]: Reloading finished in 283 ms. Jan 30 06:15:50.889010 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 06:15:50.890138 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 06:15:50.899973 systemd[1]: Starting ensure-sysext.service... Jan 30 06:15:50.903503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 06:15:50.915888 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Jan 30 06:15:50.915900 systemd[1]: Reloading... Jan 30 06:15:50.942161 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 06:15:50.942471 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 06:15:50.943720 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 06:15:50.944095 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 30 06:15:50.944229 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 30 06:15:50.948449 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 06:15:50.949929 systemd-tmpfiles[1272]: Skipping /boot Jan 30 06:15:50.961904 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 06:15:50.961976 systemd-tmpfiles[1272]: Skipping /boot Jan 30 06:15:50.998425 zram_generator::config[1304]: No configuration found. Jan 30 06:15:51.093671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 06:15:51.134658 systemd[1]: Reloading finished in 218 ms. Jan 30 06:15:51.152773 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 06:15:51.157202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 06:15:51.171936 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 06:15:51.176977 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 06:15:51.180117 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 06:15:51.184947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 06:15:51.191352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 06:15:51.199500 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 06:15:51.205725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.206128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 06:15:51.213004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 06:15:51.215863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 06:15:51.222012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 06:15:51.222782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 06:15:51.234416 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 06:15:51.236052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.244854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.245049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 06:15:51.245272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 06:15:51.245358 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.257372 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 06:15:51.259267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 06:15:51.261476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 06:15:51.261650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 06:15:51.264286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 06:15:51.264452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 06:15:51.265323 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 06:15:51.265484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 06:15:51.269436 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Jan 30 06:15:51.280377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.281613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 06:15:51.287325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 06:15:51.298150 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 06:15:51.302537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 06:15:51.306997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 06:15:51.307617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 06:15:51.316551 augenrules[1378]: No rules Jan 30 06:15:51.318002 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 06:15:51.319865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.320992 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 30 06:15:51.322199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 06:15:51.322368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 06:15:51.328467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 06:15:51.330344 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 06:15:51.331850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 06:15:51.332687 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 06:15:51.333679 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 06:15:51.334406 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 06:15:51.341287 systemd[1]: Finished ensure-sysext.service. Jan 30 06:15:51.350395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 06:15:51.351571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 06:15:51.351792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 06:15:51.367943 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 06:15:51.368469 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 06:15:51.368544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 06:15:51.378925 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 06:15:51.380844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 06:15:51.390967 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 06:15:51.452549 systemd-resolved[1347]: Positive Trust Anchors: Jan 30 06:15:51.452566 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 06:15:51.452592 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 06:15:51.459319 systemd-resolved[1347]: Using system hostname 'ci-4081-3-0-a-a10ab07ed7'. Jan 30 06:15:51.460877 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 06:15:51.461528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 06:15:51.466178 systemd-networkd[1401]: lo: Link UP Jan 30 06:15:51.466188 systemd-networkd[1401]: lo: Gained carrier Jan 30 06:15:51.468882 systemd-networkd[1401]: Enumeration completed Jan 30 06:15:51.468958 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 06:15:51.469603 systemd[1]: Reached target network.target - Network. Jan 30 06:15:51.477006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 06:15:51.482522 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 06:15:51.483132 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 06:15:51.497249 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 06:15:51.559029 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:51.559042 systemd-networkd[1401]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 06:15:51.560048 systemd-networkd[1401]: eth1: Link UP Jan 30 06:15:51.560285 systemd-networkd[1401]: eth1: Gained carrier Jan 30 06:15:51.560303 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:51.566249 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:51.566257 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 06:15:51.567306 systemd-networkd[1401]: eth0: Link UP Jan 30 06:15:51.567314 systemd-networkd[1401]: eth0: Gained carrier Jan 30 06:15:51.567326 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 06:15:51.574823 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 30 06:15:51.601932 kernel: ACPI: button: Power Button [PWRF] Jan 30 06:15:51.604885 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 06:15:51.606347 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 06:15:51.607060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 06:15:51.607166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 06:15:51.608824 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1405) Jan 30 06:15:51.611009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 06:15:51.611128 systemd-networkd[1401]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 06:15:51.620990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 06:15:51.622680 systemd-networkd[1401]: eth0: DHCPv4 address 78.47.103.36/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 06:15:51.624392 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Jan 30 06:15:51.625510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 06:15:51.627300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 06:15:51.627343 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 06:15:51.627360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
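Both interfaces above are leased a /32 host address whose gateway sits outside that prefix (78.47.103.36/32 with gateway 172.31.1.1, 10.0.0.3/32 with gateway 10.0.0.1), which is the usual layout on Hetzner cloud networks; the gateway only becomes reachable through an on-link host route rather than an ordinary subnet route. A quick check of that off-subnet relationship with the standard ipaddress module, using the values copied from the DHCP lines above:

    import ipaddress

    iface = ipaddress.ip_interface("78.47.103.36/32")   # public address as leased above
    gateway = ipaddress.ip_address("172.31.1.1")        # gateway as leased above

    # A /32 prefix contains only the address itself, so the gateway can never
    # be on-subnet; networkd has to reach it via an explicit on-link route.
    print(gateway in iface.network)   # False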
Jan 30 06:15:51.628033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 06:15:51.629226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 06:15:51.644609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 06:15:51.646645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 06:15:51.648262 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 06:15:51.649207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 06:15:51.668746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 06:15:51.668855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 06:15:51.677503 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 06:15:51.687026 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 06:15:51.691961 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 06:15:51.697902 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 30 06:15:51.706314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 06:15:51.712886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 06:15:51.727926 kernel: EDAC MC: Ver: 3.0.0 Jan 30 06:15:51.744117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:51.745118 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 06:15:51.750052 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 30 06:15:51.754833 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 30 06:15:51.756182 kernel: Console: switching to colour dummy device 80x25 Jan 30 06:15:51.756977 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 06:15:51.757008 kernel: [drm] features: -context_init Jan 30 06:15:51.759026 kernel: [drm] number of scanouts: 1 Jan 30 06:15:51.759059 kernel: [drm] number of cap sets: 0 Jan 30 06:15:51.761891 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 06:15:51.772038 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 06:15:51.772084 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 06:15:51.781825 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 06:15:51.782388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 06:15:51.782660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:51.793058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:51.796459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 06:15:51.796679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:51.800717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 06:15:51.872534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 06:15:52.855489 systemd-resolved[1347]: Clock change detected. Flushing caches. 
Jan 30 06:15:52.855557 systemd-timesyncd[1404]: Contacted time server 185.248.188.98:123 (0.flatcar.pool.ntp.org). Jan 30 06:15:52.855606 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2025-01-30 06:15:52.855439 UTC. Jan 30 06:15:52.859508 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 06:15:52.865345 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 06:15:52.877411 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 06:15:52.909491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 06:15:52.912007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 06:15:52.912151 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 06:15:52.912394 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 06:15:52.912568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 06:15:52.912920 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 06:15:52.913201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 06:15:52.913315 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 06:15:52.913444 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 06:15:52.913483 systemd[1]: Reached target paths.target - Path Units. Jan 30 06:15:52.913553 systemd[1]: Reached target timers.target - Timer Units. Jan 30 06:15:52.914887 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 06:15:52.917849 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 06:15:52.924898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 06:15:52.926365 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 06:15:52.926855 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 06:15:52.927002 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 06:15:52.927612 systemd[1]: Reached target basic.target - Basic System. Jan 30 06:15:52.929001 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 06:15:52.929035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 06:15:52.931225 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 06:15:52.938824 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 06:15:52.943234 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 06:15:52.943317 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 06:15:52.947217 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 06:15:52.952238 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 06:15:52.952693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 30 06:15:52.956274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 06:15:52.964280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 06:15:52.971054 jq[1468]: false Jan 30 06:15:52.972265 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 06:15:52.976244 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 06:15:52.983257 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 06:15:52.994299 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 06:15:52.996863 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 06:15:52.997280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 06:15:53.008244 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 06:15:53.011572 coreos-metadata[1466]: Jan 30 06:15:53.011 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 06:15:53.016671 coreos-metadata[1466]: Jan 30 06:15:53.016 INFO Fetch successful Jan 30 06:15:53.016846 coreos-metadata[1466]: Jan 30 06:15:53.016 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 06:15:53.018198 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 06:15:53.025877 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 06:15:53.033177 coreos-metadata[1466]: Jan 30 06:15:53.032 INFO Fetch successful Jan 30 06:15:53.035526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 06:15:53.035736 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 06:15:53.051170 jq[1481]: true Jan 30 06:15:53.066205 update_engine[1478]: I20250130 06:15:53.063057 1478 main.cc:92] Flatcar Update Engine starting Jan 30 06:15:53.066438 dbus-daemon[1467]: [system] SELinux support is enabled Jan 30 06:15:53.069800 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 06:15:53.070018 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 06:15:53.072852 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 06:15:53.087240 update_engine[1478]: I20250130 06:15:53.086303 1478 update_check_scheduler.cc:74] Next update check in 6m1s Jan 30 06:15:53.090972 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 06:15:53.092023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 06:15:53.093777 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 06:15:53.094755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 06:15:53.094774 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 06:15:53.098719 systemd[1]: Started update-engine.service - Update Engine. 
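coreos-metadata above reads the link-local Hetzner endpoint over plain HTTP. A minimal stdlib-only sketch of the same fetch, assuming it runs on a Hetzner instance where 169.254.169.254 is reachable; the URL is the one logged above and the two-second timeout is an arbitrary choice for the example:

    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata"   # endpoint from the log above

    # Link-local metadata services either answer immediately or not at all,
    # so a short timeout keeps the failure mode quick when run off-platform.
    with urllib.request.urlopen(URL, timeout=2) as resp:
        print(resp.read().decode())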
Jan 30 06:15:53.106517 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 06:15:53.109996 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 06:15:53.110219 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 06:15:53.125913 extend-filesystems[1469]: Found loop4 Jan 30 06:15:53.136631 jq[1491]: true Jan 30 06:15:53.144598 extend-filesystems[1469]: Found loop5 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found loop6 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found loop7 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda1 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda2 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda3 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found usr Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda4 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda6 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda7 Jan 30 06:15:53.144598 extend-filesystems[1469]: Found sda9 Jan 30 06:15:53.144598 extend-filesystems[1469]: Checking size of /dev/sda9 Jan 30 06:15:53.183825 tar[1486]: linux-amd64/LICENSE Jan 30 06:15:53.183825 tar[1486]: linux-amd64/helm Jan 30 06:15:53.194048 extend-filesystems[1469]: Resized partition /dev/sda9 Jan 30 06:15:53.186650 systemd-logind[1477]: New seat seat0. Jan 30 06:15:53.206816 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024) Jan 30 06:15:53.195857 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 06:15:53.217749 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 06:15:53.195876 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 06:15:53.196101 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 06:15:53.303485 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 06:15:53.308815 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 06:15:53.330191 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Jan 30 06:15:53.331495 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 06:15:53.346529 systemd[1]: Starting sshkeys.service... Jan 30 06:15:53.383038 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1386) Jan 30 06:15:53.415667 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 06:15:53.431823 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 06:15:53.444627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 06:15:53.456089 extend-filesystems[1518]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 06:15:53.456089 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 06:15:53.456089 extend-filesystems[1518]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 06:15:53.462812 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Jan 30 06:15:53.462812 extend-filesystems[1469]: Found sr0 Jan 30 06:15:53.465367 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 06:15:53.466706 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
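The resize2fs pass above grows the root filesystem from 1617920 to 9393147 blocks; with the 4 KiB block size implied by the "(4k) blocks" extend-filesystems message, that is roughly 6.2 GiB growing to about 35.8 GiB. The conversion, using the figures from the log:

    BLOCK_SIZE = 4096          # "(4k) blocks" per the extend-filesystems output above
    OLD_BLOCKS = 1_617_920
    NEW_BLOCKS = 9_393_147

    def gib(blocks):
        return blocks * BLOCK_SIZE / 2**30

    print(f"{gib(OLD_BLOCKS):.2f} GiB -> {gib(NEW_BLOCKS):.2f} GiB")   # 6.17 GiB -> 35.83 GiB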
Jan 30 06:15:53.472101 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 06:15:53.481181 coreos-metadata[1546]: Jan 30 06:15:53.481 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 06:15:53.494049 coreos-metadata[1546]: Jan 30 06:15:53.494 INFO Fetch successful Jan 30 06:15:53.496535 unknown[1546]: wrote ssh authorized keys file for user: core Jan 30 06:15:53.501044 containerd[1500]: time="2025-01-30T06:15:53.500972817Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 06:15:53.538637 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Jan 30 06:15:53.540189 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 06:15:53.546269 containerd[1500]: time="2025-01-30T06:15:53.545199976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.543493 systemd[1]: Finished sshkeys.service. Jan 30 06:15:53.550256 systemd-networkd[1401]: eth1: Gained IPv6LL Jan 30 06:15:53.556261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557270183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557317892Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557336918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557510654Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557529088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557594682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557607586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557785920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557800918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557817379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 06:15:53.567408 containerd[1500]: time="2025-01-30T06:15:53.557826286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.565375 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.557912678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.558170392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.558278694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.558290997Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.558384112Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 06:15:53.568820 containerd[1500]: time="2025-01-30T06:15:53.558447822Z" level=info msg="metadata content store policy set" policy=shared Jan 30 06:15:53.574352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.583961693Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584013871Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584029891Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584044779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584068213Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584273628Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584463895Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584562691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584577959Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584589040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584601313Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584612464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584626561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585526 containerd[1500]: time="2025-01-30T06:15:53.584639515Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584651627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584662447Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584672777Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584682775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584708544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584720196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584733701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584745103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584755352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584765982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584775289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584786270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584796819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585758 containerd[1500]: time="2025-01-30T06:15:53.584809223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.585734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584819512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584829330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584841263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584854207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584870237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584879805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584889393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584926773Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584942202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584950939Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584960507Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584968511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584978370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 06:15:53.586017 containerd[1500]: time="2025-01-30T06:15:53.584990753Z" level=info msg="NRI interface is disabled by configuration." Jan 30 06:15:53.586252 containerd[1500]: time="2025-01-30T06:15:53.584999920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.587313600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.587369534Z" level=info msg="Connect containerd service" Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.587401605Z" level=info msg="using legacy CRI server" Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.587408207Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.587519506Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 06:15:53.588635 containerd[1500]: time="2025-01-30T06:15:53.588021427Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 06:15:53.590016 
containerd[1500]: time="2025-01-30T06:15:53.589998806Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 06:15:53.590256 containerd[1500]: time="2025-01-30T06:15:53.590238545Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 06:15:53.590501 containerd[1500]: time="2025-01-30T06:15:53.590478245Z" level=info msg="Start subscribing containerd event" Jan 30 06:15:53.590598 containerd[1500]: time="2025-01-30T06:15:53.590585266Z" level=info msg="Start recovering state" Jan 30 06:15:53.590877 containerd[1500]: time="2025-01-30T06:15:53.590864820Z" level=info msg="Start event monitor" Jan 30 06:15:53.590925 containerd[1500]: time="2025-01-30T06:15:53.590914434Z" level=info msg="Start snapshots syncer" Jan 30 06:15:53.590966 containerd[1500]: time="2025-01-30T06:15:53.590956061Z" level=info msg="Start cni network conf syncer for default" Jan 30 06:15:53.591003 containerd[1500]: time="2025-01-30T06:15:53.590993882Z" level=info msg="Start streaming server" Jan 30 06:15:53.591133 containerd[1500]: time="2025-01-30T06:15:53.591100843Z" level=info msg="containerd successfully booted in 0.092404s" Jan 30 06:15:53.592234 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 06:15:53.636173 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 06:15:54.000479 tar[1486]: linux-amd64/README.md Jan 30 06:15:54.016067 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 06:15:54.031625 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 06:15:54.052681 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 06:15:54.064334 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 06:15:54.071406 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 06:15:54.071622 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 06:15:54.084340 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 06:15:54.095644 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 06:15:54.104406 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 06:15:54.106946 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 06:15:54.110337 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 06:15:54.318314 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 30 06:15:54.604499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:15:54.605527 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 06:15:54.608699 systemd[1]: Startup finished in 1.251s (kernel) + 5.645s (initrd) + 4.328s (userspace) = 11.225s. Jan 30 06:15:54.610447 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:15:55.137536 kubelet[1597]: E0130 06:15:55.137455 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:15:55.141793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:15:55.141980 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
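The boot summary above reports 1.251 s (kernel) + 5.645 s (initrd) + 4.328 s (userspace) = 11.225 s; each component is rounded to the millisecond independently, so their printed sum can land a millisecond away from the printed total. Checking the figures from the log:

    kernel, initrd, userspace = 1.251, 5.645, 4.328   # seconds, from the "Startup finished" line above
    print(f"{kernel + initrd + userspace:.3f} s")     # 11.224 s vs. the logged 11.225 s (per-component rounding)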
Jan 30 06:16:05.392355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 06:16:05.401638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:05.536252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:05.536423 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:05.568768 kubelet[1615]: E0130 06:16:05.568710 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:05.574637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:05.574823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 06:16:15.825551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 06:16:15.835424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:16.004632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:16.015369 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:16.050357 kubelet[1631]: E0130 06:16:16.050278 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:16.053326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:16.053549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 06:16:25.552497 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 06:16:25.561354 systemd[1]: Started sshd@0-78.47.103.36:22-125.74.237.67:51826.service - OpenSSH per-connection server daemon (125.74.237.67:51826). Jan 30 06:16:26.303827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 06:16:26.309276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:26.438723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:26.442724 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:26.476314 kubelet[1649]: E0130 06:16:26.476269 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:26.479936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:26.480136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 06:16:36.599956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
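kubelet keeps failing with the same error because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written later, for example when kubeadm initializes or joins the node. Meanwhile systemd reschedules the unit on a fixed backoff, and the restart counter above ticks roughly every ten seconds. The spacing can be read straight off two consecutive "Scheduled restart job" timestamps; a quick check with values copied from the log:

    from datetime import datetime

    # Timestamps of the first two "Scheduled restart job" messages above.
    t1 = datetime.strptime("06:16:05.392355", "%H:%M:%S.%f")
    t2 = datetime.strptime("06:16:15.825551", "%H:%M:%S.%f")

    # ~10.4 s apart: consistent with a RestartSec= on the order of ten seconds
    # plus scheduling and startup overhead (the exact unit setting is not
    # shown in this log).
    print((t2 - t1).total_seconds())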
Jan 30 06:16:36.606416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:36.684363 systemd[1]: Started sshd@1-78.47.103.36:22-183.110.116.126:37030.service - OpenSSH per-connection server daemon (183.110.116.126:37030). Jan 30 06:16:36.783916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:36.787787 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:36.822073 kubelet[1668]: E0130 06:16:36.821996 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:36.825270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:36.825488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 06:16:38.429175 update_engine[1478]: I20250130 06:16:38.428983 1478 update_attempter.cc:509] Updating boot flags... Jan 30 06:16:38.464849 sshd[1661]: Invalid user age from 183.110.116.126 port 37030 Jan 30 06:16:38.491174 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1684) Jan 30 06:16:38.556183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1684) Jan 30 06:16:38.806552 sshd[1661]: Received disconnect from 183.110.116.126 port 37030:11: Bye Bye [preauth] Jan 30 06:16:38.806825 sshd[1661]: Disconnected from invalid user age 183.110.116.126 port 37030 [preauth] Jan 30 06:16:38.810174 systemd[1]: sshd@1-78.47.103.36:22-183.110.116.126:37030.service: Deactivated successfully. Jan 30 06:16:44.649582 systemd[1]: Started sshd@2-78.47.103.36:22-139.178.89.65:53144.service - OpenSSH per-connection server daemon (139.178.89.65:53144). Jan 30 06:16:45.646081 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 53144 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:45.648723 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:45.657548 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 06:16:45.673356 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 06:16:45.676021 systemd-logind[1477]: New session 1 of user core. Jan 30 06:16:45.687805 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 06:16:45.698421 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 06:16:45.701926 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 06:16:45.802266 systemd[1700]: Queued start job for default target default.target. Jan 30 06:16:45.809297 systemd[1700]: Created slice app.slice - User Application Slice. Jan 30 06:16:45.809323 systemd[1700]: Reached target paths.target - Paths. Jan 30 06:16:45.809337 systemd[1700]: Reached target timers.target - Timers. Jan 30 06:16:45.810760 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 06:16:45.823370 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 06:16:45.823505 systemd[1700]: Reached target sockets.target - Sockets. 
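The connections from 125.74.237.67 and 183.110.116.126 above are opportunistic probes that give up before authenticating ("Invalid user age ... [preauth]"), while the later 139.178.89.65 sessions are the legitimate key-based logins. A small sketch for tallying such probes per source address, assuming sshd lines in the format quoted above are piped in on stdin (for instance from an exported journal):

    import collections
    import re
    import sys

    # Matches lines like: Invalid user age from 183.110.116.126 port 37030
    PROBE = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")

    counts = collections.Counter()
    for line in sys.stdin:
        m = PROBE.search(line)
        if m:
            counts[m.group(2)] += 1

    for addr, hits in counts.most_common():
        print(f"{addr}  {hits} invalid-user attempt(s)")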
Jan 30 06:16:45.823523 systemd[1700]: Reached target basic.target - Basic System. Jan 30 06:16:45.823569 systemd[1700]: Reached target default.target - Main User Target. Jan 30 06:16:45.823606 systemd[1700]: Startup finished in 114ms. Jan 30 06:16:45.823896 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 06:16:45.825316 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 06:16:46.524490 systemd[1]: Started sshd@3-78.47.103.36:22-139.178.89.65:53152.service - OpenSSH per-connection server daemon (139.178.89.65:53152). Jan 30 06:16:46.849712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 06:16:46.856681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:47.020959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:47.025057 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:47.056395 kubelet[1721]: E0130 06:16:47.056316 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:47.058475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:47.058677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 06:16:47.516361 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 53152 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:47.518082 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:47.523034 systemd-logind[1477]: New session 2 of user core. Jan 30 06:16:47.534354 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 06:16:48.196673 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 30 06:16:48.200712 systemd[1]: sshd@3-78.47.103.36:22-139.178.89.65:53152.service: Deactivated successfully. Jan 30 06:16:48.203091 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 06:16:48.203783 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Jan 30 06:16:48.204950 systemd-logind[1477]: Removed session 2. Jan 30 06:16:48.372359 systemd[1]: Started sshd@4-78.47.103.36:22-139.178.89.65:53154.service - OpenSSH per-connection server daemon (139.178.89.65:53154). Jan 30 06:16:49.354919 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 53154 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:49.356635 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:49.361421 systemd-logind[1477]: New session 3 of user core. Jan 30 06:16:49.374302 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 06:16:50.037199 sshd[1733]: pam_unix(sshd:session): session closed for user core Jan 30 06:16:50.040329 systemd[1]: sshd@4-78.47.103.36:22-139.178.89.65:53154.service: Deactivated successfully. Jan 30 06:16:50.042574 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 06:16:50.044017 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Jan 30 06:16:50.045360 systemd-logind[1477]: Removed session 3. 
Jan 30 06:16:50.211367 systemd[1]: Started sshd@5-78.47.103.36:22-139.178.89.65:53170.service - OpenSSH per-connection server daemon (139.178.89.65:53170). Jan 30 06:16:51.185736 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 53170 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:51.187503 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:51.192892 systemd-logind[1477]: New session 4 of user core. Jan 30 06:16:51.199488 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 06:16:51.866581 sshd[1740]: pam_unix(sshd:session): session closed for user core Jan 30 06:16:51.870509 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Jan 30 06:16:51.870850 systemd[1]: sshd@5-78.47.103.36:22-139.178.89.65:53170.service: Deactivated successfully. Jan 30 06:16:51.873025 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 06:16:51.873998 systemd-logind[1477]: Removed session 4. Jan 30 06:16:52.040348 systemd[1]: Started sshd@6-78.47.103.36:22-139.178.89.65:55904.service - OpenSSH per-connection server daemon (139.178.89.65:55904). Jan 30 06:16:53.003596 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 55904 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:53.005369 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:53.010555 systemd-logind[1477]: New session 5 of user core. Jan 30 06:16:53.017251 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 06:16:53.529143 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 06:16:53.529533 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 06:16:53.544476 sudo[1750]: pam_unix(sudo:session): session closed for user root Jan 30 06:16:53.702284 sshd[1747]: pam_unix(sshd:session): session closed for user core Jan 30 06:16:53.705896 systemd[1]: sshd@6-78.47.103.36:22-139.178.89.65:55904.service: Deactivated successfully. Jan 30 06:16:53.708087 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 06:16:53.709921 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Jan 30 06:16:53.711347 systemd-logind[1477]: Removed session 5. Jan 30 06:16:53.881366 systemd[1]: Started sshd@7-78.47.103.36:22-139.178.89.65:55906.service - OpenSSH per-connection server daemon (139.178.89.65:55906). Jan 30 06:16:54.879277 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 55906 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:54.881633 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:54.888560 systemd-logind[1477]: New session 6 of user core. Jan 30 06:16:54.895272 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 06:16:55.410014 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 06:16:55.410683 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 06:16:55.417555 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 30 06:16:55.428666 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 06:16:55.429414 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 06:16:55.456997 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 06:16:55.460196 auditctl[1762]: No rules Jan 30 06:16:55.460964 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 06:16:55.461380 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 06:16:55.470607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 06:16:55.529647 augenrules[1780]: No rules Jan 30 06:16:55.532516 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 06:16:55.535070 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 30 06:16:55.695839 sshd[1755]: pam_unix(sshd:session): session closed for user core Jan 30 06:16:55.701859 systemd[1]: sshd@7-78.47.103.36:22-139.178.89.65:55906.service: Deactivated successfully. Jan 30 06:16:55.704974 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 06:16:55.707786 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Jan 30 06:16:55.709572 systemd-logind[1477]: Removed session 6. Jan 30 06:16:55.870172 systemd[1]: Started sshd@8-78.47.103.36:22-139.178.89.65:55910.service - OpenSSH per-connection server daemon (139.178.89.65:55910). Jan 30 06:16:56.867665 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 55910 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:16:56.870461 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:16:56.878215 systemd-logind[1477]: New session 7 of user core. Jan 30 06:16:56.889336 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 06:16:57.099492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 06:16:57.106813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:16:57.281168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:16:57.285507 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:16:57.320906 kubelet[1799]: E0130 06:16:57.320841 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:16:57.324884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:16:57.325067 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 06:16:57.394493 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 06:16:57.395071 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 06:16:57.677597 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 06:16:57.680522 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 06:16:57.947730 dockerd[1825]: time="2025-01-30T06:16:57.947568813Z" level=info msg="Starting up" Jan 30 06:16:58.016041 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2922153050-merged.mount: Deactivated successfully. Jan 30 06:16:58.043450 dockerd[1825]: time="2025-01-30T06:16:58.043225336Z" level=info msg="Loading containers: start." Jan 30 06:16:58.151137 kernel: Initializing XFRM netlink socket Jan 30 06:16:58.235256 systemd-networkd[1401]: docker0: Link UP Jan 30 06:16:58.259556 dockerd[1825]: time="2025-01-30T06:16:58.259512894Z" level=info msg="Loading containers: done." Jan 30 06:16:58.275169 dockerd[1825]: time="2025-01-30T06:16:58.275072898Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 06:16:58.275313 dockerd[1825]: time="2025-01-30T06:16:58.275219865Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 06:16:58.275412 dockerd[1825]: time="2025-01-30T06:16:58.275383762Z" level=info msg="Daemon has completed initialization" Jan 30 06:16:58.305229 dockerd[1825]: time="2025-01-30T06:16:58.304686638Z" level=info msg="API listen on /run/docker.sock" Jan 30 06:16:58.304889 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 06:16:59.012905 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4067518101-merged.mount: Deactivated successfully. Jan 30 06:16:59.028478 containerd[1500]: time="2025-01-30T06:16:59.028103059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 06:16:59.707146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112318031.mount: Deactivated successfully. 
Jan 30 06:17:00.988141 containerd[1500]: time="2025-01-30T06:17:00.987933346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:00.988936 containerd[1500]: time="2025-01-30T06:17:00.988853093Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674916" Jan 30 06:17:00.989710 containerd[1500]: time="2025-01-30T06:17:00.989654758Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:00.991863 containerd[1500]: time="2025-01-30T06:17:00.991830352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:00.993220 containerd[1500]: time="2025-01-30T06:17:00.992748445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.964594241s" Jan 30 06:17:00.993220 containerd[1500]: time="2025-01-30T06:17:00.992776197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 06:17:00.994371 containerd[1500]: time="2025-01-30T06:17:00.994233563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 06:17:01.856390 systemd[1]: Started sshd@9-78.47.103.36:22-194.0.234.37:61496.service - OpenSSH per-connection server daemon (194.0.234.37:61496). 
Jan 30 06:17:02.475168 containerd[1500]: time="2025-01-30T06:17:02.475098338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:02.476533 containerd[1500]: time="2025-01-30T06:17:02.476491473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770731" Jan 30 06:17:02.476804 containerd[1500]: time="2025-01-30T06:17:02.476782850Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:02.479318 containerd[1500]: time="2025-01-30T06:17:02.479294645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:02.480632 containerd[1500]: time="2025-01-30T06:17:02.480600366Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.486342978s" Jan 30 06:17:02.480632 containerd[1500]: time="2025-01-30T06:17:02.480629581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 06:17:02.481132 containerd[1500]: time="2025-01-30T06:17:02.481086878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 06:17:02.550286 sshd[2018]: Invalid user usuario from 194.0.234.37 port 61496 Jan 30 06:17:02.621803 sshd[2018]: Connection closed by invalid user usuario 194.0.234.37 port 61496 [preauth] Jan 30 06:17:02.624805 systemd[1]: sshd@9-78.47.103.36:22-194.0.234.37:61496.service: Deactivated successfully. 
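The sshd@9 connection is not part of the provisioning session: it is an unauthenticated probe (invalid user "usuario" from 194.0.234.37) that closes before authentication completes. If the journal is exported to plain text, attempts like this can be tallied with a short sketch; the input file name and message format below are assumptions based on the entry above.

    #!/usr/bin/env python3
    """Tally 'Invalid user' preauth attempts per source address.

    Sketch under assumptions: the input is a plain-text journal export whose
    sshd lines match the format of the probe recorded above.
    """
    import re
    import sys
    from collections import Counter

    # Matches e.g.: sshd[2018]: Invalid user usuario from 194.0.234.37 port 61496
    PATTERN = re.compile(r"sshd\[\d+\]: Invalid user (\S+) from (\S+) port \d+")

    hits = Counter()
    with open(sys.argv[1] if len(sys.argv) > 1 else "journal.txt", encoding="utf-8") as fh:
        for line in fh:
            match = PATTERN.search(line)
            if match:
                user, addr = match.groups()
                hits[(addr, user)] += 1

    for (addr, user), count in hits.most_common():
        print(f"{count:5d}  {addr:<18}  {user}")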
Jan 30 06:17:03.620474 containerd[1500]: time="2025-01-30T06:17:03.620429538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:03.621418 containerd[1500]: time="2025-01-30T06:17:03.621269073Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169779" Jan 30 06:17:03.622172 containerd[1500]: time="2025-01-30T06:17:03.622101235Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:03.624440 containerd[1500]: time="2025-01-30T06:17:03.624403577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:03.625479 containerd[1500]: time="2025-01-30T06:17:03.625368728Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.144233169s" Jan 30 06:17:03.625479 containerd[1500]: time="2025-01-30T06:17:03.625395649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 06:17:03.625844 containerd[1500]: time="2025-01-30T06:17:03.625813634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 06:17:04.655421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103248648.mount: Deactivated successfully. 
Jan 30 06:17:04.975356 containerd[1500]: time="2025-01-30T06:17:04.975222869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:04.976266 containerd[1500]: time="2025-01-30T06:17:04.976221343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909492" Jan 30 06:17:04.977369 containerd[1500]: time="2025-01-30T06:17:04.977337868Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:04.979539 containerd[1500]: time="2025-01-30T06:17:04.979485058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:04.980670 containerd[1500]: time="2025-01-30T06:17:04.980252239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.354346111s" Jan 30 06:17:04.980670 containerd[1500]: time="2025-01-30T06:17:04.980289819Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 06:17:04.981213 containerd[1500]: time="2025-01-30T06:17:04.981175652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 06:17:05.527418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530378362.mount: Deactivated successfully. 
Jan 30 06:17:06.199152 containerd[1500]: time="2025-01-30T06:17:06.199085551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.200149 containerd[1500]: time="2025-01-30T06:17:06.199980680Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565333" Jan 30 06:17:06.200990 containerd[1500]: time="2025-01-30T06:17:06.200946423Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.203621 containerd[1500]: time="2025-01-30T06:17:06.203579264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.204766 containerd[1500]: time="2025-01-30T06:17:06.204636258Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.22343083s" Jan 30 06:17:06.204766 containerd[1500]: time="2025-01-30T06:17:06.204678577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 06:17:06.205501 containerd[1500]: time="2025-01-30T06:17:06.205468551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 06:17:06.710299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758533854.mount: Deactivated successfully. 
Jan 30 06:17:06.715902 containerd[1500]: time="2025-01-30T06:17:06.715833620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.716688 containerd[1500]: time="2025-01-30T06:17:06.716636337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 30 06:17:06.717470 containerd[1500]: time="2025-01-30T06:17:06.717420448Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.719480 containerd[1500]: time="2025-01-30T06:17:06.719435169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:06.720759 containerd[1500]: time="2025-01-30T06:17:06.720595916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.098432ms" Jan 30 06:17:06.720759 containerd[1500]: time="2025-01-30T06:17:06.720636262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 06:17:06.721324 containerd[1500]: time="2025-01-30T06:17:06.721285021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 06:17:07.303407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353167151.mount: Deactivated successfully. Jan 30 06:17:07.350281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 06:17:07.358237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:17:07.506229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:07.509834 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 06:17:07.542167 kubelet[2114]: E0130 06:17:07.541922 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 06:17:07.545386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 06:17:07.545565 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 06:17:09.060430 containerd[1500]: time="2025-01-30T06:17:09.060379583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:09.073275 containerd[1500]: time="2025-01-30T06:17:09.073223654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551382" Jan 30 06:17:09.074345 containerd[1500]: time="2025-01-30T06:17:09.074300174Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:09.077130 containerd[1500]: time="2025-01-30T06:17:09.077068860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:09.078362 containerd[1500]: time="2025-01-30T06:17:09.078317212Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.356992076s" Jan 30 06:17:09.078420 containerd[1500]: time="2025-01-30T06:17:09.078364952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 06:17:11.673741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:11.685322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:17:11.713497 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-7.scope)... Jan 30 06:17:11.713510 systemd[1]: Reloading... Jan 30 06:17:11.845196 zram_generator::config[2252]: No configuration found. Jan 30 06:17:11.924346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 06:17:11.990392 systemd[1]: Reloading finished in 276 ms. Jan 30 06:17:12.038582 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 06:17:12.038705 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 06:17:12.039214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:12.043430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:17:12.179629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:12.184240 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 06:17:12.224368 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:17:12.224368 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
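The containerd "Pulled image" records above (kube-apiserver through etcd, plus kube-proxy, coredns, and pause) all share one message format that includes the image size in bytes and the pull duration. Below is a sketch that tabulates them from a plain-text journal export piped on stdin; the regular expression relies only on the format visible in these entries, in which the quotes inside msg are backslash-escaped.

    #!/usr/bin/env python3
    """Tabulate containerd 'Pulled image' records: image, size, pull duration.

    Sketch: reads a plain-text journal export on stdin and relies only on the
    message format visible in the entries above.
    """
    import re
    import sys

    # e.g. ... msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id
    #      \"sha256:...\", ..., size \"57680541\" in 2.356992076s"
    PULL = re.compile(r'Pulled image \\"([^"\\]+)\\".*size \\"(\d+)\\" in ([0-9.]+m?s)')

    for line in sys.stdin:
        match = PULL.search(line)
        if match:
            image, size_bytes, took = match.groups()
            print(f"{image:<55} {int(size_bytes) / 1e6:8.1f} MB  {took}")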
Jan 30 06:17:12.224368 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:17:12.224747 kubelet[2288]: I0130 06:17:12.224430 2288 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 06:17:12.473070 kubelet[2288]: I0130 06:17:12.472966 2288 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 06:17:12.473070 kubelet[2288]: I0130 06:17:12.472997 2288 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 06:17:12.473568 kubelet[2288]: I0130 06:17:12.473543 2288 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 06:17:12.499622 kubelet[2288]: E0130 06:17:12.499580 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://78.47.103.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:12.501120 kubelet[2288]: I0130 06:17:12.501072 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 06:17:12.512774 kubelet[2288]: E0130 06:17:12.512746 2288 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 06:17:12.512774 kubelet[2288]: I0130 06:17:12.512772 2288 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 06:17:12.517541 kubelet[2288]: I0130 06:17:12.517516 2288 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 06:17:12.518896 kubelet[2288]: I0130 06:17:12.518859 2288 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 06:17:12.519025 kubelet[2288]: I0130 06:17:12.518893 2288 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-a-a10ab07ed7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 06:17:12.519099 kubelet[2288]: I0130 06:17:12.519027 2288 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 06:17:12.519099 kubelet[2288]: I0130 06:17:12.519035 2288 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 06:17:12.519170 kubelet[2288]: I0130 06:17:12.519157 2288 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:17:12.521965 kubelet[2288]: I0130 06:17:12.521946 2288 kubelet.go:446] "Attempting to sync node with API server" Jan 30 06:17:12.522017 kubelet[2288]: I0130 06:17:12.521970 2288 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 06:17:12.522017 kubelet[2288]: I0130 06:17:12.521987 2288 kubelet.go:352] "Adding apiserver pod source" Jan 30 06:17:12.522017 kubelet[2288]: I0130 06:17:12.521995 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 06:17:12.527056 kubelet[2288]: W0130 06:17:12.526772 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.103.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:12.527056 kubelet[2288]: E0130 06:17:12.526822 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.47.103.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:12.527056 kubelet[2288]: W0130 
06:17:12.526874 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.103.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a-a10ab07ed7&limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:12.527056 kubelet[2288]: E0130 06:17:12.526897 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.47.103.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a-a10ab07ed7&limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:12.527264 kubelet[2288]: I0130 06:17:12.527243 2288 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 06:17:12.529720 kubelet[2288]: I0130 06:17:12.529704 2288 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 06:17:12.530726 kubelet[2288]: W0130 06:17:12.530711 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 06:17:12.532760 kubelet[2288]: I0130 06:17:12.532354 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 06:17:12.532760 kubelet[2288]: I0130 06:17:12.532381 2288 server.go:1287] "Started kubelet" Jan 30 06:17:12.537469 kubelet[2288]: I0130 06:17:12.537455 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 06:17:12.540219 kubelet[2288]: E0130 06:17:12.536509 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.103.36:6443/api/v1/namespaces/default/events\": dial tcp 78.47.103.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-a-a10ab07ed7.181f63e8c2575509 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-a-a10ab07ed7,UID:ci-4081-3-0-a-a10ab07ed7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-a-a10ab07ed7,},FirstTimestamp:2025-01-30 06:17:12.532366601 +0000 UTC m=+0.344377581,LastTimestamp:2025-01-30 06:17:12.532366601 +0000 UTC m=+0.344377581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-a-a10ab07ed7,}" Jan 30 06:17:12.543959 kubelet[2288]: I0130 06:17:12.543947 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 06:17:12.544234 kubelet[2288]: E0130 06:17:12.544206 2288 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" Jan 30 06:17:12.545870 kubelet[2288]: I0130 06:17:12.545432 2288 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 06:17:12.547156 kubelet[2288]: I0130 06:17:12.547142 2288 server.go:490] "Adding debug handlers to kubelet server" Jan 30 06:17:12.547835 kubelet[2288]: I0130 06:17:12.547800 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 06:17:12.548137 kubelet[2288]: I0130 06:17:12.548068 2288 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 06:17:12.549648 kubelet[2288]: I0130 06:17:12.548942 2288 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 06:17:12.549648 kubelet[2288]: E0130 06:17:12.549090 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.103.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a-a10ab07ed7?timeout=10s\": dial tcp 78.47.103.36:6443: connect: connection refused" interval="200ms" Jan 30 06:17:12.549648 kubelet[2288]: I0130 06:17:12.549244 2288 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 06:17:12.549648 kubelet[2288]: I0130 06:17:12.549270 2288 reconciler.go:26] "Reconciler: start to sync state" Jan 30 06:17:12.549648 kubelet[2288]: W0130 06:17:12.549577 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.103.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:12.549648 kubelet[2288]: E0130 06:17:12.549624 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.47.103.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:12.551089 kubelet[2288]: I0130 06:17:12.551071 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 06:17:12.553142 kubelet[2288]: I0130 06:17:12.552175 2288 factory.go:221] Registration of the containerd container factory successfully Jan 30 06:17:12.553142 kubelet[2288]: I0130 06:17:12.552187 2288 factory.go:221] Registration of the systemd container factory successfully Jan 30 06:17:12.562784 kubelet[2288]: E0130 06:17:12.562766 2288 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 06:17:12.562918 kubelet[2288]: I0130 06:17:12.562900 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 06:17:12.564211 kubelet[2288]: I0130 06:17:12.564196 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 06:17:12.564278 kubelet[2288]: I0130 06:17:12.564268 2288 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 06:17:12.564337 kubelet[2288]: I0130 06:17:12.564327 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 06:17:12.564390 kubelet[2288]: I0130 06:17:12.564380 2288 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 06:17:12.564479 kubelet[2288]: E0130 06:17:12.564456 2288 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 06:17:12.571195 kubelet[2288]: W0130 06:17:12.571152 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.103.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:12.571279 kubelet[2288]: E0130 06:17:12.571263 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.47.103.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:12.588028 kubelet[2288]: I0130 06:17:12.588009 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 06:17:12.588028 kubelet[2288]: I0130 06:17:12.588023 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 06:17:12.588175 kubelet[2288]: I0130 06:17:12.588037 2288 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:17:12.589961 kubelet[2288]: I0130 06:17:12.589940 2288 policy_none.go:49] "None policy: Start" Jan 30 06:17:12.589961 kubelet[2288]: I0130 06:17:12.589958 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 06:17:12.589961 kubelet[2288]: I0130 06:17:12.589968 2288 state_mem.go:35] "Initializing new in-memory state store" Jan 30 06:17:12.595587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 06:17:12.605262 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 06:17:12.608295 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 06:17:12.615850 kubelet[2288]: I0130 06:17:12.615827 2288 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 06:17:12.616201 kubelet[2288]: I0130 06:17:12.615996 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 06:17:12.616201 kubelet[2288]: I0130 06:17:12.616005 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 06:17:12.616270 kubelet[2288]: I0130 06:17:12.616209 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 06:17:12.617725 kubelet[2288]: E0130 06:17:12.617540 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 06:17:12.617793 kubelet[2288]: E0130 06:17:12.617740 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-a-a10ab07ed7\" not found" Jan 30 06:17:12.674965 systemd[1]: Created slice kubepods-burstable-podce9b9d4d856568bd0e8f4ea187be8b39.slice - libcontainer container kubepods-burstable-podce9b9d4d856568bd0e8f4ea187be8b39.slice. 
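The kubepods-burstable-pod<hash>.slice units being created here correspond to the static control-plane pods the kubelet loads from /etc/kubernetes/manifests (the "Adding static pod path" entry earlier). A dependency-free sketch, assuming that default path, to list the manifests and the pod names they declare:

    #!/usr/bin/env python3
    """List the static pod manifests the kubelet watches on this node.

    Sketch: /etc/kubernetes/manifests is the static pod path the kubelet
    logged earlier; the name extraction is a crude text scan so that no YAML
    parser needs to be installed.
    """
    import pathlib

    MANIFESTS = pathlib.Path("/etc/kubernetes/manifests")

    for manifest in sorted(MANIFESTS.glob("*.yaml")):
        pod_name = "<not found>"
        for raw_line in manifest.read_text().splitlines():
            stripped = raw_line.strip()
            if stripped.startswith("name:"):
                pod_name = stripped.split(":", 1)[1].strip()
                break   # the first 'name:' is metadata.name in kubeadm-style manifests
        print(f"{manifest.name:<35} -> {pod_name}")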
Jan 30 06:17:12.685976 kubelet[2288]: E0130 06:17:12.685956 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.689322 systemd[1]: Created slice kubepods-burstable-pode26781bb4acd88f9d14a0b3037b84072.slice - libcontainer container kubepods-burstable-pode26781bb4acd88f9d14a0b3037b84072.slice. Jan 30 06:17:12.691506 kubelet[2288]: E0130 06:17:12.691340 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.693644 systemd[1]: Created slice kubepods-burstable-pod3a11d169335304ce493b4c8fd0d63511.slice - libcontainer container kubepods-burstable-pod3a11d169335304ce493b4c8fd0d63511.slice. Jan 30 06:17:12.695145 kubelet[2288]: E0130 06:17:12.695126 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.719131 kubelet[2288]: I0130 06:17:12.719032 2288 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.719635 kubelet[2288]: E0130 06:17:12.719512 2288 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.103.36:6443/api/v1/nodes\": dial tcp 78.47.103.36:6443: connect: connection refused" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.750632 kubelet[2288]: I0130 06:17:12.750179 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a11d169335304ce493b4c8fd0d63511-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-a-a10ab07ed7\" (UID: \"3a11d169335304ce493b4c8fd0d63511\") " pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.750632 kubelet[2288]: I0130 06:17:12.750224 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.750632 kubelet[2288]: E0130 06:17:12.750342 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.103.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a-a10ab07ed7?timeout=10s\": dial tcp 78.47.103.36:6443: connect: connection refused" interval="400ms" Jan 30 06:17:12.750632 kubelet[2288]: I0130 06:17:12.750516 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.750632 kubelet[2288]: I0130 06:17:12.750548 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " 
pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.851566 kubelet[2288]: I0130 06:17:12.851450 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.851566 kubelet[2288]: I0130 06:17:12.851527 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.851874 kubelet[2288]: I0130 06:17:12.851650 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.851874 kubelet[2288]: I0130 06:17:12.851704 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.851874 kubelet[2288]: I0130 06:17:12.851739 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.923715 kubelet[2288]: I0130 06:17:12.923518 2288 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.924144 kubelet[2288]: E0130 06:17:12.924061 2288 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.103.36:6443/api/v1/nodes\": dial tcp 78.47.103.36:6443: connect: connection refused" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:12.988016 containerd[1500]: time="2025-01-30T06:17:12.987922849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-a-a10ab07ed7,Uid:ce9b9d4d856568bd0e8f4ea187be8b39,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:12.995475 containerd[1500]: time="2025-01-30T06:17:12.995410217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-a-a10ab07ed7,Uid:e26781bb4acd88f9d14a0b3037b84072,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:12.996577 containerd[1500]: time="2025-01-30T06:17:12.996449919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-a-a10ab07ed7,Uid:3a11d169335304ce493b4c8fd0d63511,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:13.151790 kubelet[2288]: E0130 06:17:13.151694 2288 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://78.47.103.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a-a10ab07ed7?timeout=10s\": dial tcp 78.47.103.36:6443: connect: connection refused" interval="800ms" Jan 30 06:17:13.326963 kubelet[2288]: I0130 06:17:13.326916 2288 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:13.329145 kubelet[2288]: E0130 06:17:13.327711 2288 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.103.36:6443/api/v1/nodes\": dial tcp 78.47.103.36:6443: connect: connection refused" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:13.490225 kubelet[2288]: W0130 06:17:13.489871 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.103.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:13.490637 kubelet[2288]: E0130 06:17:13.490578 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.47.103.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:13.492098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880402786.mount: Deactivated successfully. Jan 30 06:17:13.498047 containerd[1500]: time="2025-01-30T06:17:13.497997023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 06:17:13.499175 containerd[1500]: time="2025-01-30T06:17:13.499130580Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 06:17:13.499945 containerd[1500]: time="2025-01-30T06:17:13.499903020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 06:17:13.500456 containerd[1500]: time="2025-01-30T06:17:13.500386897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 06:17:13.501001 containerd[1500]: time="2025-01-30T06:17:13.500965063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 06:17:13.501707 containerd[1500]: time="2025-01-30T06:17:13.501670817Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 06:17:13.502449 containerd[1500]: time="2025-01-30T06:17:13.502412178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 06:17:13.504495 containerd[1500]: time="2025-01-30T06:17:13.504422589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 06:17:13.506709 containerd[1500]: 
time="2025-01-30T06:17:13.506474570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.399385ms" Jan 30 06:17:13.509149 containerd[1500]: time="2025-01-30T06:17:13.507911937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.421007ms" Jan 30 06:17:13.510136 containerd[1500]: time="2025-01-30T06:17:13.510073944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.549686ms" Jan 30 06:17:13.545981 kubelet[2288]: W0130 06:17:13.545890 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.103.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a-a10ab07ed7&limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:13.546232 kubelet[2288]: E0130 06:17:13.546198 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.47.103.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a-a10ab07ed7&limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:13.660168 containerd[1500]: time="2025-01-30T06:17:13.659068927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:13.660168 containerd[1500]: time="2025-01-30T06:17:13.659168784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:13.660168 containerd[1500]: time="2025-01-30T06:17:13.659195735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.660168 containerd[1500]: time="2025-01-30T06:17:13.659294951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.665402 containerd[1500]: time="2025-01-30T06:17:13.665346716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:13.666227 containerd[1500]: time="2025-01-30T06:17:13.666190429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:13.667087 containerd[1500]: time="2025-01-30T06:17:13.667056354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.667270 containerd[1500]: time="2025-01-30T06:17:13.667238876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.667514 containerd[1500]: time="2025-01-30T06:17:13.667469599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:13.667621 containerd[1500]: time="2025-01-30T06:17:13.667579866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:13.667762 containerd[1500]: time="2025-01-30T06:17:13.667738704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.667907 containerd[1500]: time="2025-01-30T06:17:13.667883466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:13.684403 systemd[1]: Started cri-containerd-82bdf0637c2ded06c734658f8a388cf9ce1f47bdadcbb62f3221ff9a9f1d1fa1.scope - libcontainer container 82bdf0637c2ded06c734658f8a388cf9ce1f47bdadcbb62f3221ff9a9f1d1fa1. Jan 30 06:17:13.702402 systemd[1]: Started cri-containerd-2c864db364196448de026394051e656501013b3dc73f62607510baed77f79b82.scope - libcontainer container 2c864db364196448de026394051e656501013b3dc73f62607510baed77f79b82. Jan 30 06:17:13.704671 systemd[1]: Started cri-containerd-d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248.scope - libcontainer container d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248. Jan 30 06:17:13.752604 containerd[1500]: time="2025-01-30T06:17:13.751770630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-a-a10ab07ed7,Uid:3a11d169335304ce493b4c8fd0d63511,Namespace:kube-system,Attempt:0,} returns sandbox id \"82bdf0637c2ded06c734658f8a388cf9ce1f47bdadcbb62f3221ff9a9f1d1fa1\"" Jan 30 06:17:13.759504 containerd[1500]: time="2025-01-30T06:17:13.759360521Z" level=info msg="CreateContainer within sandbox \"82bdf0637c2ded06c734658f8a388cf9ce1f47bdadcbb62f3221ff9a9f1d1fa1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 06:17:13.771153 containerd[1500]: time="2025-01-30T06:17:13.771072316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-a-a10ab07ed7,Uid:ce9b9d4d856568bd0e8f4ea187be8b39,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c864db364196448de026394051e656501013b3dc73f62607510baed77f79b82\"" Jan 30 06:17:13.773683 containerd[1500]: time="2025-01-30T06:17:13.773642788Z" level=info msg="CreateContainer within sandbox \"2c864db364196448de026394051e656501013b3dc73f62607510baed77f79b82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 06:17:13.776725 containerd[1500]: time="2025-01-30T06:17:13.776527571Z" level=info msg="CreateContainer within sandbox \"82bdf0637c2ded06c734658f8a388cf9ce1f47bdadcbb62f3221ff9a9f1d1fa1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d35260fb9aa6fd0f546d80e5518ae771211c7c1e54b935a6c9c33ebb8c0863f\"" Jan 30 06:17:13.776725 containerd[1500]: time="2025-01-30T06:17:13.776682792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-a-a10ab07ed7,Uid:e26781bb4acd88f9d14a0b3037b84072,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248\"" Jan 30 06:17:13.777295 containerd[1500]: time="2025-01-30T06:17:13.777278149Z" level=info msg="StartContainer for 
\"9d35260fb9aa6fd0f546d80e5518ae771211c7c1e54b935a6c9c33ebb8c0863f\"" Jan 30 06:17:13.779801 containerd[1500]: time="2025-01-30T06:17:13.779684583Z" level=info msg="CreateContainer within sandbox \"d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 06:17:13.785422 containerd[1500]: time="2025-01-30T06:17:13.785389759Z" level=info msg="CreateContainer within sandbox \"2c864db364196448de026394051e656501013b3dc73f62607510baed77f79b82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d68b1ab52f198ddf05c9e11b54cd6a21c050a1c929ba7f8d20b5a12d2efdb543\"" Jan 30 06:17:13.786277 containerd[1500]: time="2025-01-30T06:17:13.786248219Z" level=info msg="StartContainer for \"d68b1ab52f198ddf05c9e11b54cd6a21c050a1c929ba7f8d20b5a12d2efdb543\"" Jan 30 06:17:13.800174 containerd[1500]: time="2025-01-30T06:17:13.799623616Z" level=info msg="CreateContainer within sandbox \"d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f\"" Jan 30 06:17:13.800572 containerd[1500]: time="2025-01-30T06:17:13.800543601Z" level=info msg="StartContainer for \"a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f\"" Jan 30 06:17:13.810577 systemd[1]: Started cri-containerd-9d35260fb9aa6fd0f546d80e5518ae771211c7c1e54b935a6c9c33ebb8c0863f.scope - libcontainer container 9d35260fb9aa6fd0f546d80e5518ae771211c7c1e54b935a6c9c33ebb8c0863f. Jan 30 06:17:13.836245 systemd[1]: Started cri-containerd-d68b1ab52f198ddf05c9e11b54cd6a21c050a1c929ba7f8d20b5a12d2efdb543.scope - libcontainer container d68b1ab52f198ddf05c9e11b54cd6a21c050a1c929ba7f8d20b5a12d2efdb543. Jan 30 06:17:13.840101 systemd[1]: Started cri-containerd-a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f.scope - libcontainer container a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f. 
Jan 30 06:17:13.871566 containerd[1500]: time="2025-01-30T06:17:13.871364130Z" level=info msg="StartContainer for \"9d35260fb9aa6fd0f546d80e5518ae771211c7c1e54b935a6c9c33ebb8c0863f\" returns successfully" Jan 30 06:17:13.883596 kubelet[2288]: W0130 06:17:13.883527 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.103.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:13.883761 kubelet[2288]: E0130 06:17:13.883609 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.47.103.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:13.908922 containerd[1500]: time="2025-01-30T06:17:13.908876913Z" level=info msg="StartContainer for \"d68b1ab52f198ddf05c9e11b54cd6a21c050a1c929ba7f8d20b5a12d2efdb543\" returns successfully" Jan 30 06:17:13.909073 containerd[1500]: time="2025-01-30T06:17:13.908945452Z" level=info msg="StartContainer for \"a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f\" returns successfully" Jan 30 06:17:13.952635 kubelet[2288]: E0130 06:17:13.952561 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.103.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a-a10ab07ed7?timeout=10s\": dial tcp 78.47.103.36:6443: connect: connection refused" interval="1.6s" Jan 30 06:17:14.033636 kubelet[2288]: W0130 06:17:14.033500 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.103.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.103.36:6443: connect: connection refused Jan 30 06:17:14.033636 kubelet[2288]: E0130 06:17:14.033565 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.47.103.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.103.36:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:17:14.130317 kubelet[2288]: I0130 06:17:14.130288 2288 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:14.130702 kubelet[2288]: E0130 06:17:14.130675 2288 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.103.36:6443/api/v1/nodes\": dial tcp 78.47.103.36:6443: connect: connection refused" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:14.594681 kubelet[2288]: E0130 06:17:14.594643 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:14.596971 kubelet[2288]: E0130 06:17:14.596946 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:14.598903 kubelet[2288]: E0130 06:17:14.598881 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.503269 
kubelet[2288]: E0130 06:17:15.503204 2288 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-0-a-a10ab07ed7" not found Jan 30 06:17:15.557629 kubelet[2288]: E0130 06:17:15.557527 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.605013 kubelet[2288]: E0130 06:17:15.604677 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.605013 kubelet[2288]: E0130 06:17:15.604722 2288 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-a-a10ab07ed7\" not found" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.733260 kubelet[2288]: I0130 06:17:15.733193 2288 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.745564 kubelet[2288]: I0130 06:17:15.745359 2288 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.745564 kubelet[2288]: E0130 06:17:15.745400 2288 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-0-a-a10ab07ed7\": node \"ci-4081-3-0-a-a10ab07ed7\" not found" Jan 30 06:17:15.745564 kubelet[2288]: I0130 06:17:15.745436 2288 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.753707 kubelet[2288]: E0130 06:17:15.753612 2288 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.753707 kubelet[2288]: I0130 06:17:15.753634 2288 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.755611 kubelet[2288]: E0130 06:17:15.755581 2288 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.755611 kubelet[2288]: I0130 06:17:15.755605 2288 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:15.756835 kubelet[2288]: E0130 06:17:15.756794 2288 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-a-a10ab07ed7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:16.526864 kubelet[2288]: I0130 06:17:16.526790 2288 apiserver.go:52] "Watching apiserver" Jan 30 06:17:16.549772 kubelet[2288]: I0130 06:17:16.549719 2288 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 06:17:16.603783 kubelet[2288]: I0130 06:17:16.603720 2288 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.136622 systemd[1]: Reloading requested from client PID 2562 ('systemctl') (unit session-7.scope)... Jan 30 06:17:17.136640 systemd[1]: Reloading... 
Jan 30 06:17:17.228150 zram_generator::config[2604]: No configuration found. Jan 30 06:17:17.302455 kubelet[2288]: I0130 06:17:17.302231 2288 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.325823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 06:17:17.402873 systemd[1]: Reloading finished in 265 ms. Jan 30 06:17:17.446654 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:17:17.461486 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 06:17:17.461716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:17.465395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 06:17:17.612034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 06:17:17.618281 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 06:17:17.669463 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:17:17.669463 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 06:17:17.669463 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:17:17.670516 kubelet[2655]: I0130 06:17:17.670476 2655 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 06:17:17.676894 kubelet[2655]: I0130 06:17:17.676869 2655 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 06:17:17.676894 kubelet[2655]: I0130 06:17:17.676886 2655 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 06:17:17.680160 kubelet[2655]: I0130 06:17:17.680140 2655 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 06:17:17.682241 kubelet[2655]: I0130 06:17:17.682221 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 06:17:17.686177 kubelet[2655]: I0130 06:17:17.686137 2655 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 06:17:17.693076 kubelet[2655]: E0130 06:17:17.693038 2655 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 06:17:17.693076 kubelet[2655]: I0130 06:17:17.693067 2655 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 06:17:17.695983 kubelet[2655]: I0130 06:17:17.695958 2655 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 06:17:17.696221 kubelet[2655]: I0130 06:17:17.696184 2655 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 06:17:17.696402 kubelet[2655]: I0130 06:17:17.696218 2655 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-a-a10ab07ed7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 06:17:17.696477 kubelet[2655]: I0130 06:17:17.696408 2655 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 06:17:17.696477 kubelet[2655]: I0130 06:17:17.696422 2655 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 06:17:17.696477 kubelet[2655]: I0130 06:17:17.696459 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:17:17.696741 kubelet[2655]: I0130 06:17:17.696626 2655 kubelet.go:446] "Attempting to sync node with API server" Jan 30 06:17:17.696741 kubelet[2655]: I0130 06:17:17.696640 2655 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 06:17:17.696741 kubelet[2655]: I0130 06:17:17.696655 2655 kubelet.go:352] "Adding apiserver pod source" Jan 30 06:17:17.696741 kubelet[2655]: I0130 06:17:17.696663 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 06:17:17.710141 kubelet[2655]: I0130 06:17:17.709572 2655 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 06:17:17.710141 kubelet[2655]: I0130 06:17:17.709856 2655 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 06:17:17.711592 kubelet[2655]: I0130 06:17:17.711561 2655 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 06:17:17.711646 kubelet[2655]: I0130 06:17:17.711597 2655 server.go:1287] "Started kubelet" Jan 30 06:17:17.713413 kubelet[2655]: I0130 06:17:17.711707 2655 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 06:17:17.713413 kubelet[2655]: I0130 06:17:17.712634 2655 
server.go:490] "Adding debug handlers to kubelet server" Jan 30 06:17:17.714949 kubelet[2655]: I0130 06:17:17.714593 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 06:17:17.714949 kubelet[2655]: I0130 06:17:17.714776 2655 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 06:17:17.715872 kubelet[2655]: I0130 06:17:17.715850 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 06:17:17.717187 kubelet[2655]: I0130 06:17:17.717099 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 06:17:17.720685 kubelet[2655]: I0130 06:17:17.720673 2655 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 06:17:17.720850 kubelet[2655]: I0130 06:17:17.720819 2655 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 06:17:17.721041 kubelet[2655]: I0130 06:17:17.721030 2655 reconciler.go:26] "Reconciler: start to sync state" Jan 30 06:17:17.724948 kubelet[2655]: I0130 06:17:17.724341 2655 factory.go:221] Registration of the systemd container factory successfully Jan 30 06:17:17.724948 kubelet[2655]: I0130 06:17:17.724412 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 06:17:17.726232 kubelet[2655]: E0130 06:17:17.726216 2655 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 06:17:17.726833 kubelet[2655]: I0130 06:17:17.726821 2655 factory.go:221] Registration of the containerd container factory successfully Jan 30 06:17:17.731156 kubelet[2655]: I0130 06:17:17.731131 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 06:17:17.732729 kubelet[2655]: I0130 06:17:17.732701 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 06:17:17.732729 kubelet[2655]: I0130 06:17:17.732726 2655 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 06:17:17.732795 kubelet[2655]: I0130 06:17:17.732740 2655 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 06:17:17.732795 kubelet[2655]: I0130 06:17:17.732746 2655 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 06:17:17.732795 kubelet[2655]: E0130 06:17:17.732787 2655 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 06:17:17.769589 kubelet[2655]: I0130 06:17:17.769549 2655 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769768 2655 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769788 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769917 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769927 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769943 2655 policy_none.go:49] "None policy: Start" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769951 2655 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.769960 2655 state_mem.go:35] "Initializing new in-memory state store" Jan 30 06:17:17.770464 kubelet[2655]: I0130 06:17:17.770048 2655 state_mem.go:75] "Updated machine memory state" Jan 30 06:17:17.773889 kubelet[2655]: I0130 06:17:17.773867 2655 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 06:17:17.774301 kubelet[2655]: I0130 06:17:17.774015 2655 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 06:17:17.774301 kubelet[2655]: I0130 06:17:17.774027 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 06:17:17.777124 kubelet[2655]: I0130 06:17:17.776146 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 06:17:17.778180 kubelet[2655]: E0130 06:17:17.777665 2655 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 06:17:17.833854 kubelet[2655]: I0130 06:17:17.833801 2655 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.834812 kubelet[2655]: I0130 06:17:17.834785 2655 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.835034 kubelet[2655]: I0130 06:17:17.835010 2655 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.843614 kubelet[2655]: E0130 06:17:17.843548 2655 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.844365 kubelet[2655]: E0130 06:17:17.844334 2655 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-a-a10ab07ed7\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.879480 kubelet[2655]: I0130 06:17:17.879452 2655 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.888030 kubelet[2655]: I0130 06:17:17.887954 2655 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.888166 kubelet[2655]: I0130 06:17:17.888134 2655 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.922872 kubelet[2655]: I0130 06:17:17.922412 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.922872 kubelet[2655]: I0130 06:17:17.922463 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.922872 kubelet[2655]: I0130 06:17:17.922492 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.922872 kubelet[2655]: I0130 06:17:17.922513 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce9b9d4d856568bd0e8f4ea187be8b39-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" (UID: \"ce9b9d4d856568bd0e8f4ea187be8b39\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.922872 kubelet[2655]: I0130 06:17:17.922534 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" 
(UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.923177 kubelet[2655]: I0130 06:17:17.922570 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.923177 kubelet[2655]: I0130 06:17:17.922591 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a11d169335304ce493b4c8fd0d63511-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-a-a10ab07ed7\" (UID: \"3a11d169335304ce493b4c8fd0d63511\") " pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.923177 kubelet[2655]: I0130 06:17:17.922638 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:17.923177 kubelet[2655]: I0130 06:17:17.922685 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e26781bb4acd88f9d14a0b3037b84072-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-a-a10ab07ed7\" (UID: \"e26781bb4acd88f9d14a0b3037b84072\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:18.699663 kubelet[2655]: I0130 06:17:18.699620 2655 apiserver.go:52] "Watching apiserver" Jan 30 06:17:18.721495 kubelet[2655]: I0130 06:17:18.721433 2655 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 06:17:18.758724 kubelet[2655]: I0130 06:17:18.758235 2655 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:18.766011 kubelet[2655]: E0130 06:17:18.765953 2655 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-a-a10ab07ed7\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:18.792145 kubelet[2655]: I0130 06:17:18.792063 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-a-a10ab07ed7" podStartSLOduration=2.7920481280000002 podStartE2EDuration="2.792048128s" podCreationTimestamp="2025-01-30 06:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:17:18.791416945 +0000 UTC m=+1.166987681" watchObservedRunningTime="2025-01-30 06:17:18.792048128 +0000 UTC m=+1.167618865" Jan 30 06:17:18.809331 kubelet[2655]: I0130 06:17:18.809194 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-a-a10ab07ed7" podStartSLOduration=1.809179237 podStartE2EDuration="1.809179237s" podCreationTimestamp="2025-01-30 06:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-30 06:17:18.808474475 +0000 UTC m=+1.184045221" watchObservedRunningTime="2025-01-30 06:17:18.809179237 +0000 UTC m=+1.184749973" Jan 30 06:17:18.836615 kubelet[2655]: I0130 06:17:18.836438 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-a-a10ab07ed7" podStartSLOduration=1.836422517 podStartE2EDuration="1.836422517s" podCreationTimestamp="2025-01-30 06:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:17:18.820009736 +0000 UTC m=+1.195580472" watchObservedRunningTime="2025-01-30 06:17:18.836422517 +0000 UTC m=+1.211993263" Jan 30 06:17:22.867124 sudo[1807]: pam_unix(sudo:session): session closed for user root Jan 30 06:17:22.882838 kubelet[2655]: I0130 06:17:22.882798 2655 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 06:17:22.883370 containerd[1500]: time="2025-01-30T06:17:22.883250379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 06:17:22.883632 kubelet[2655]: I0130 06:17:22.883447 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 06:17:23.034524 sshd[1788]: pam_unix(sshd:session): session closed for user core Jan 30 06:17:23.039711 systemd[1]: sshd@8-78.47.103.36:22-139.178.89.65:55910.service: Deactivated successfully. Jan 30 06:17:23.041916 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 06:17:23.042323 systemd[1]: session-7.scope: Consumed 4.232s CPU time, 141.2M memory peak, 0B memory swap peak. Jan 30 06:17:23.043025 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Jan 30 06:17:23.044454 systemd-logind[1477]: Removed session 7. Jan 30 06:17:23.720727 systemd[1]: Created slice kubepods-besteffort-poda569ed59_f323_4992_baf8_8c7646001586.slice - libcontainer container kubepods-besteffort-poda569ed59_f323_4992_baf8_8c7646001586.slice. 
Jan 30 06:17:23.761310 kubelet[2655]: I0130 06:17:23.761260 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a569ed59-f323-4992-baf8-8c7646001586-xtables-lock\") pod \"kube-proxy-c8pq9\" (UID: \"a569ed59-f323-4992-baf8-8c7646001586\") " pod="kube-system/kube-proxy-c8pq9" Jan 30 06:17:23.761310 kubelet[2655]: I0130 06:17:23.761292 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a569ed59-f323-4992-baf8-8c7646001586-lib-modules\") pod \"kube-proxy-c8pq9\" (UID: \"a569ed59-f323-4992-baf8-8c7646001586\") " pod="kube-system/kube-proxy-c8pq9" Jan 30 06:17:23.761310 kubelet[2655]: I0130 06:17:23.761308 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a569ed59-f323-4992-baf8-8c7646001586-kube-proxy\") pod \"kube-proxy-c8pq9\" (UID: \"a569ed59-f323-4992-baf8-8c7646001586\") " pod="kube-system/kube-proxy-c8pq9" Jan 30 06:17:23.761522 kubelet[2655]: I0130 06:17:23.761321 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zq7l\" (UniqueName: \"kubernetes.io/projected/a569ed59-f323-4992-baf8-8c7646001586-kube-api-access-2zq7l\") pod \"kube-proxy-c8pq9\" (UID: \"a569ed59-f323-4992-baf8-8c7646001586\") " pod="kube-system/kube-proxy-c8pq9" Jan 30 06:17:23.891718 systemd[1]: Created slice kubepods-besteffort-pod87e5e8ed_54a5_482b_b7b5_b932768f3a98.slice - libcontainer container kubepods-besteffort-pod87e5e8ed_54a5_482b_b7b5_b932768f3a98.slice. Jan 30 06:17:23.962898 kubelet[2655]: I0130 06:17:23.962834 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87e5e8ed-54a5-482b-b7b5-b932768f3a98-var-lib-calico\") pod \"tigera-operator-7d68577dc5-5mlz7\" (UID: \"87e5e8ed-54a5-482b-b7b5-b932768f3a98\") " pod="tigera-operator/tigera-operator-7d68577dc5-5mlz7" Jan 30 06:17:23.962898 kubelet[2655]: I0130 06:17:23.962877 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cjq\" (UniqueName: \"kubernetes.io/projected/87e5e8ed-54a5-482b-b7b5-b932768f3a98-kube-api-access-t8cjq\") pod \"tigera-operator-7d68577dc5-5mlz7\" (UID: \"87e5e8ed-54a5-482b-b7b5-b932768f3a98\") " pod="tigera-operator/tigera-operator-7d68577dc5-5mlz7" Jan 30 06:17:24.028318 containerd[1500]: time="2025-01-30T06:17:24.028167870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8pq9,Uid:a569ed59-f323-4992-baf8-8c7646001586,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:24.055142 containerd[1500]: time="2025-01-30T06:17:24.055026844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:24.055142 containerd[1500]: time="2025-01-30T06:17:24.055096304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:24.056089 containerd[1500]: time="2025-01-30T06:17:24.055349900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:24.056089 containerd[1500]: time="2025-01-30T06:17:24.055960846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:24.083625 systemd[1]: run-containerd-runc-k8s.io-1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe-runc.CjohGZ.mount: Deactivated successfully. Jan 30 06:17:24.091239 systemd[1]: Started cri-containerd-1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe.scope - libcontainer container 1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe. Jan 30 06:17:24.113650 containerd[1500]: time="2025-01-30T06:17:24.113611532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8pq9,Uid:a569ed59-f323-4992-baf8-8c7646001586,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe\"" Jan 30 06:17:24.117031 containerd[1500]: time="2025-01-30T06:17:24.117003725Z" level=info msg="CreateContainer within sandbox \"1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 06:17:24.132142 containerd[1500]: time="2025-01-30T06:17:24.132095836Z" level=info msg="CreateContainer within sandbox \"1e526e7dc0b588ce756ad604031821c91b27f891c9de7062c110e3e05226dbfe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"170a7b8608ccb0939e22b907bd376f8c6c9077f3ae40090ead380b3366e90598\"" Jan 30 06:17:24.132633 containerd[1500]: time="2025-01-30T06:17:24.132583591Z" level=info msg="StartContainer for \"170a7b8608ccb0939e22b907bd376f8c6c9077f3ae40090ead380b3366e90598\"" Jan 30 06:17:24.159246 systemd[1]: Started cri-containerd-170a7b8608ccb0939e22b907bd376f8c6c9077f3ae40090ead380b3366e90598.scope - libcontainer container 170a7b8608ccb0939e22b907bd376f8c6c9077f3ae40090ead380b3366e90598. Jan 30 06:17:24.188655 containerd[1500]: time="2025-01-30T06:17:24.188560476Z" level=info msg="StartContainer for \"170a7b8608ccb0939e22b907bd376f8c6c9077f3ae40090ead380b3366e90598\" returns successfully" Jan 30 06:17:24.195708 containerd[1500]: time="2025-01-30T06:17:24.195683148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5mlz7,Uid:87e5e8ed-54a5-482b-b7b5-b932768f3a98,Namespace:tigera-operator,Attempt:0,}" Jan 30 06:17:24.216002 containerd[1500]: time="2025-01-30T06:17:24.215731677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:24.216002 containerd[1500]: time="2025-01-30T06:17:24.215782874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:24.216002 containerd[1500]: time="2025-01-30T06:17:24.215795307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:24.216775 containerd[1500]: time="2025-01-30T06:17:24.216659267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:24.236348 systemd[1]: Started cri-containerd-4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3.scope - libcontainer container 4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3. 
Jan 30 06:17:24.273188 containerd[1500]: time="2025-01-30T06:17:24.273148464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5mlz7,Uid:87e5e8ed-54a5-482b-b7b5-b932768f3a98,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3\"" Jan 30 06:17:24.275304 containerd[1500]: time="2025-01-30T06:17:24.275279221Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 06:17:24.784928 kubelet[2655]: I0130 06:17:24.784879 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8pq9" podStartSLOduration=1.784864072 podStartE2EDuration="1.784864072s" podCreationTimestamp="2025-01-30 06:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:17:24.784540815 +0000 UTC m=+7.160111551" watchObservedRunningTime="2025-01-30 06:17:24.784864072 +0000 UTC m=+7.160434809" Jan 30 06:17:26.224725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327698112.mount: Deactivated successfully. Jan 30 06:17:26.590637 containerd[1500]: time="2025-01-30T06:17:26.590501123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:26.591724 containerd[1500]: time="2025-01-30T06:17:26.591552335Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 06:17:26.592546 containerd[1500]: time="2025-01-30T06:17:26.592504131Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:26.594200 containerd[1500]: time="2025-01-30T06:17:26.594159255Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:26.595156 containerd[1500]: time="2025-01-30T06:17:26.594997727Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.319684622s" Jan 30 06:17:26.595156 containerd[1500]: time="2025-01-30T06:17:26.595030088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 06:17:26.598233 containerd[1500]: time="2025-01-30T06:17:26.598087333Z" level=info msg="CreateContainer within sandbox \"4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 06:17:26.614280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469637535.mount: Deactivated successfully. 
Jan 30 06:17:26.617351 containerd[1500]: time="2025-01-30T06:17:26.617312407Z" level=info msg="CreateContainer within sandbox \"4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c\"" Jan 30 06:17:26.617823 containerd[1500]: time="2025-01-30T06:17:26.617705354Z" level=info msg="StartContainer for \"161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c\"" Jan 30 06:17:26.646275 systemd[1]: Started cri-containerd-161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c.scope - libcontainer container 161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c. Jan 30 06:17:26.671221 containerd[1500]: time="2025-01-30T06:17:26.670894087Z" level=info msg="StartContainer for \"161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c\" returns successfully" Jan 30 06:17:27.718930 kubelet[2655]: I0130 06:17:27.718608 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-5mlz7" podStartSLOduration=2.397186198 podStartE2EDuration="4.718594005s" podCreationTimestamp="2025-01-30 06:17:23 +0000 UTC" firstStartedPulling="2025-01-30 06:17:24.274406925 +0000 UTC m=+6.649977660" lastFinishedPulling="2025-01-30 06:17:26.595814731 +0000 UTC m=+8.971385467" observedRunningTime="2025-01-30 06:17:26.808066941 +0000 UTC m=+9.183637697" watchObservedRunningTime="2025-01-30 06:17:27.718594005 +0000 UTC m=+10.094164741" Jan 30 06:17:29.734439 systemd[1]: Created slice kubepods-besteffort-pod41bb3004_e4ad_4cc9_a7de_dc2d30b14a04.slice - libcontainer container kubepods-besteffort-pod41bb3004_e4ad_4cc9_a7de_dc2d30b14a04.slice. Jan 30 06:17:29.784428 systemd[1]: Created slice kubepods-besteffort-pod54b87580_46dc_4595_a2b2_8b2f0959f962.slice - libcontainer container kubepods-besteffort-pod54b87580_46dc_4595_a2b2_8b2f0959f962.slice. 
Jan 30 06:17:29.805017 kubelet[2655]: I0130 06:17:29.804953 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-flexvol-driver-host\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805017 kubelet[2655]: I0130 06:17:29.805025 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-policysync\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805813 kubelet[2655]: I0130 06:17:29.805069 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54b87580-46dc-4595-a2b2-8b2f0959f962-node-certs\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805813 kubelet[2655]: I0130 06:17:29.805088 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-net-dir\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805813 kubelet[2655]: I0130 06:17:29.805101 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqc2\" (UniqueName: \"kubernetes.io/projected/54b87580-46dc-4595-a2b2-8b2f0959f962-kube-api-access-wmqc2\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805813 kubelet[2655]: I0130 06:17:29.805146 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msf5n\" (UniqueName: \"kubernetes.io/projected/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-kube-api-access-msf5n\") pod \"calico-typha-f7f987868-v49j8\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " pod="calico-system/calico-typha-f7f987868-v49j8" Jan 30 06:17:29.805813 kubelet[2655]: I0130 06:17:29.805162 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-run-calico\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805918 kubelet[2655]: I0130 06:17:29.805178 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-lib-modules\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805918 kubelet[2655]: I0130 06:17:29.805210 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-log-dir\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805918 kubelet[2655]: I0130 
06:17:29.805233 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b87580-46dc-4595-a2b2-8b2f0959f962-tigera-ca-bundle\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.805918 kubelet[2655]: I0130 06:17:29.805250 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-tigera-ca-bundle\") pod \"calico-typha-f7f987868-v49j8\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " pod="calico-system/calico-typha-f7f987868-v49j8" Jan 30 06:17:29.805918 kubelet[2655]: I0130 06:17:29.805267 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-typha-certs\") pod \"calico-typha-f7f987868-v49j8\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " pod="calico-system/calico-typha-f7f987868-v49j8" Jan 30 06:17:29.806042 kubelet[2655]: I0130 06:17:29.805503 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-xtables-lock\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.806042 kubelet[2655]: I0130 06:17:29.805522 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-bin-dir\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.806042 kubelet[2655]: I0130 06:17:29.805537 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-lib-calico\") pod \"calico-node-rv56k\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " pod="calico-system/calico-node-rv56k" Jan 30 06:17:29.895052 kubelet[2655]: E0130 06:17:29.894412 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:29.928617 kubelet[2655]: E0130 06:17:29.928574 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.928617 kubelet[2655]: W0130 06:17:29.928617 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.928770 kubelet[2655]: E0130 06:17:29.928747 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.929219 kubelet[2655]: E0130 06:17:29.929194 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.929219 kubelet[2655]: W0130 06:17:29.929217 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.930290 kubelet[2655]: E0130 06:17:29.929330 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.930290 kubelet[2655]: E0130 06:17:29.930284 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.930342 kubelet[2655]: W0130 06:17:29.930294 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.931181 kubelet[2655]: E0130 06:17:29.931157 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.931594 kubelet[2655]: E0130 06:17:29.931573 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.931633 kubelet[2655]: W0130 06:17:29.931591 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.931969 kubelet[2655]: E0130 06:17:29.931770 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.931969 kubelet[2655]: W0130 06:17:29.931780 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.933118 kubelet[2655]: E0130 06:17:29.932030 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.933118 kubelet[2655]: W0130 06:17:29.932043 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.933118 kubelet[2655]: E0130 06:17:29.932307 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.933118 kubelet[2655]: W0130 06:17:29.932315 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.933118 kubelet[2655]: E0130 06:17:29.932325 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.933862 kubelet[2655]: E0130 06:17:29.933789 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.933862 kubelet[2655]: E0130 06:17:29.933827 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.933862 kubelet[2655]: E0130 06:17:29.933844 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.934250 kubelet[2655]: E0130 06:17:29.934042 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.934250 kubelet[2655]: W0130 06:17:29.934052 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.934250 kubelet[2655]: E0130 06:17:29.934072 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.934474 kubelet[2655]: E0130 06:17:29.934439 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.934474 kubelet[2655]: W0130 06:17:29.934465 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.934626 kubelet[2655]: E0130 06:17:29.934476 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.934650 kubelet[2655]: E0130 06:17:29.934643 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.934672 kubelet[2655]: W0130 06:17:29.934650 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.934672 kubelet[2655]: E0130 06:17:29.934658 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.935253 kubelet[2655]: E0130 06:17:29.934841 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.935253 kubelet[2655]: W0130 06:17:29.934854 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.935253 kubelet[2655]: E0130 06:17:29.934862 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.935253 kubelet[2655]: E0130 06:17:29.935156 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.935253 kubelet[2655]: W0130 06:17:29.935165 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.935253 kubelet[2655]: E0130 06:17:29.935173 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.935566 kubelet[2655]: E0130 06:17:29.935339 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.935566 kubelet[2655]: W0130 06:17:29.935346 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.935566 kubelet[2655]: E0130 06:17:29.935354 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.935566 kubelet[2655]: E0130 06:17:29.935541 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.935566 kubelet[2655]: W0130 06:17:29.935548 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.935566 kubelet[2655]: E0130 06:17:29.935555 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.942919 kubelet[2655]: E0130 06:17:29.942358 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.942919 kubelet[2655]: W0130 06:17:29.942371 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.942919 kubelet[2655]: E0130 06:17:29.942382 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.949532 kubelet[2655]: E0130 06:17:29.949517 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.949665 kubelet[2655]: W0130 06:17:29.949575 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.949665 kubelet[2655]: E0130 06:17:29.949590 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.956838 kubelet[2655]: E0130 06:17:29.956767 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.956838 kubelet[2655]: W0130 06:17:29.956780 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.956838 kubelet[2655]: E0130 06:17:29.956793 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.987573 kubelet[2655]: E0130 06:17:29.987470 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.987573 kubelet[2655]: W0130 06:17:29.987496 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.987573 kubelet[2655]: E0130 06:17:29.987516 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.987864 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.988818 kubelet[2655]: W0130 06:17:29.987874 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.987885 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.988384 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.988818 kubelet[2655]: W0130 06:17:29.988401 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.988416 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.988758 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.988818 kubelet[2655]: W0130 06:17:29.988767 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.988818 kubelet[2655]: E0130 06:17:29.988776 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.990711 kubelet[2655]: E0130 06:17:29.990168 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.990711 kubelet[2655]: W0130 06:17:29.990270 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.990711 kubelet[2655]: E0130 06:17:29.990280 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.990711 kubelet[2655]: E0130 06:17:29.990607 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.990711 kubelet[2655]: W0130 06:17:29.990615 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.990711 kubelet[2655]: E0130 06:17:29.990624 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.991278 kubelet[2655]: E0130 06:17:29.990802 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991278 kubelet[2655]: W0130 06:17:29.990809 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991278 kubelet[2655]: E0130 06:17:29.990817 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.991278 kubelet[2655]: E0130 06:17:29.991050 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991278 kubelet[2655]: W0130 06:17:29.991058 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991278 kubelet[2655]: E0130 06:17:29.991070 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.991278 kubelet[2655]: E0130 06:17:29.991274 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991278 kubelet[2655]: W0130 06:17:29.991281 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991503 kubelet[2655]: E0130 06:17:29.991289 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.991503 kubelet[2655]: E0130 06:17:29.991479 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991503 kubelet[2655]: W0130 06:17:29.991487 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991503 kubelet[2655]: E0130 06:17:29.991494 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.991944 kubelet[2655]: E0130 06:17:29.991671 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991944 kubelet[2655]: W0130 06:17:29.991685 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991944 kubelet[2655]: E0130 06:17:29.991693 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.991944 kubelet[2655]: E0130 06:17:29.991879 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.991944 kubelet[2655]: W0130 06:17:29.991887 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.991944 kubelet[2655]: E0130 06:17:29.991894 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.993303 kubelet[2655]: E0130 06:17:29.992277 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.993303 kubelet[2655]: W0130 06:17:29.992291 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.993303 kubelet[2655]: E0130 06:17:29.992307 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.993303 kubelet[2655]: E0130 06:17:29.992786 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.993303 kubelet[2655]: W0130 06:17:29.992798 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.993303 kubelet[2655]: E0130 06:17:29.992812 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993348 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996634 kubelet[2655]: W0130 06:17:29.993357 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993365 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993548 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996634 kubelet[2655]: W0130 06:17:29.993555 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993562 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993731 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996634 kubelet[2655]: W0130 06:17:29.993738 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993745 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.996634 kubelet[2655]: E0130 06:17:29.993887 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996829 kubelet[2655]: W0130 06:17:29.993895 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996829 kubelet[2655]: E0130 06:17:29.993903 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:29.996829 kubelet[2655]: E0130 06:17:29.995210 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996829 kubelet[2655]: W0130 06:17:29.995233 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996829 kubelet[2655]: E0130 06:17:29.995245 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:29.996829 kubelet[2655]: E0130 06:17:29.995417 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:29.996829 kubelet[2655]: W0130 06:17:29.995424 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:29.996829 kubelet[2655]: E0130 06:17:29.995432 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.007946 kubelet[2655]: E0130 06:17:30.007918 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.007946 kubelet[2655]: W0130 06:17:30.007940 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.008094 kubelet[2655]: E0130 06:17:30.007957 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.008094 kubelet[2655]: I0130 06:17:30.007983 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/60be3982-84c5-43fa-a1af-7b15bfd904a3-varrun\") pod \"csi-node-driver-ghtnv\" (UID: \"60be3982-84c5-43fa-a1af-7b15bfd904a3\") " pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:30.008222 kubelet[2655]: E0130 06:17:30.008206 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.008222 kubelet[2655]: W0130 06:17:30.008216 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.008506 kubelet[2655]: E0130 06:17:30.008233 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.008506 kubelet[2655]: I0130 06:17:30.008248 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60be3982-84c5-43fa-a1af-7b15bfd904a3-kubelet-dir\") pod \"csi-node-driver-ghtnv\" (UID: \"60be3982-84c5-43fa-a1af-7b15bfd904a3\") " pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:30.008746 kubelet[2655]: E0130 06:17:30.008583 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.008746 kubelet[2655]: W0130 06:17:30.008593 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.008746 kubelet[2655]: E0130 06:17:30.008637 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.008871 kubelet[2655]: E0130 06:17:30.008859 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.008945 kubelet[2655]: W0130 06:17:30.008933 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.009022 kubelet[2655]: E0130 06:17:30.009011 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.009486 kubelet[2655]: E0130 06:17:30.009250 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.009486 kubelet[2655]: W0130 06:17:30.009259 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.009486 kubelet[2655]: E0130 06:17:30.009271 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.009486 kubelet[2655]: I0130 06:17:30.009287 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v2q\" (UniqueName: \"kubernetes.io/projected/60be3982-84c5-43fa-a1af-7b15bfd904a3-kube-api-access-x9v2q\") pod \"csi-node-driver-ghtnv\" (UID: \"60be3982-84c5-43fa-a1af-7b15bfd904a3\") " pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:30.009486 kubelet[2655]: E0130 06:17:30.009469 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.009486 kubelet[2655]: W0130 06:17:30.009477 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.009899 kubelet[2655]: E0130 06:17:30.009650 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.009899 kubelet[2655]: E0130 06:17:30.009658 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.009899 kubelet[2655]: W0130 06:17:30.009660 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.009899 kubelet[2655]: I0130 06:17:30.009688 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/60be3982-84c5-43fa-a1af-7b15bfd904a3-socket-dir\") pod \"csi-node-driver-ghtnv\" (UID: \"60be3982-84c5-43fa-a1af-7b15bfd904a3\") " pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:30.009899 kubelet[2655]: E0130 06:17:30.009690 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.009899 kubelet[2655]: E0130 06:17:30.009851 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.009899 kubelet[2655]: W0130 06:17:30.009861 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.009899 kubelet[2655]: E0130 06:17:30.009877 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011057 kubelet[2655]: E0130 06:17:30.010036 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011057 kubelet[2655]: W0130 06:17:30.010043 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011057 kubelet[2655]: E0130 06:17:30.010053 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011057 kubelet[2655]: I0130 06:17:30.010068 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/60be3982-84c5-43fa-a1af-7b15bfd904a3-registration-dir\") pod \"csi-node-driver-ghtnv\" (UID: \"60be3982-84c5-43fa-a1af-7b15bfd904a3\") " pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:30.011057 kubelet[2655]: E0130 06:17:30.010247 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011057 kubelet[2655]: W0130 06:17:30.010254 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011057 kubelet[2655]: E0130 06:17:30.010262 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011057 kubelet[2655]: E0130 06:17:30.010417 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011057 kubelet[2655]: W0130 06:17:30.010424 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.010433 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.010627 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011350 kubelet[2655]: W0130 06:17:30.010635 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.010642 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.010791 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011350 kubelet[2655]: W0130 06:17:30.010799 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.010806 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.011009 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011350 kubelet[2655]: W0130 06:17:30.011017 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011350 kubelet[2655]: E0130 06:17:30.011024 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.011542 kubelet[2655]: E0130 06:17:30.011244 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.011542 kubelet[2655]: W0130 06:17:30.011251 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.011542 kubelet[2655]: E0130 06:17:30.011261 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.041908 containerd[1500]: time="2025-01-30T06:17:30.041869203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7f987868-v49j8,Uid:41bb3004-e4ad-4cc9-a7de-dc2d30b14a04,Namespace:calico-system,Attempt:0,}" Jan 30 06:17:30.082988 containerd[1500]: time="2025-01-30T06:17:30.082292705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:30.083559 containerd[1500]: time="2025-01-30T06:17:30.083152485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:30.083559 containerd[1500]: time="2025-01-30T06:17:30.083169708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:30.083559 containerd[1500]: time="2025-01-30T06:17:30.083247334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:30.088478 containerd[1500]: time="2025-01-30T06:17:30.088165539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rv56k,Uid:54b87580-46dc-4595-a2b2-8b2f0959f962,Namespace:calico-system,Attempt:0,}" Jan 30 06:17:30.112180 kubelet[2655]: E0130 06:17:30.111614 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.112180 kubelet[2655]: W0130 06:17:30.111638 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.112180 kubelet[2655]: E0130 06:17:30.111657 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.112583 kubelet[2655]: E0130 06:17:30.112444 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.112583 kubelet[2655]: W0130 06:17:30.112477 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.112583 kubelet[2655]: E0130 06:17:30.112499 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.113378 kubelet[2655]: E0130 06:17:30.113227 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.113378 kubelet[2655]: W0130 06:17:30.113240 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.113378 kubelet[2655]: E0130 06:17:30.113260 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.116647 kubelet[2655]: E0130 06:17:30.116360 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.116647 kubelet[2655]: W0130 06:17:30.116374 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.116647 kubelet[2655]: E0130 06:17:30.116388 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.116647 kubelet[2655]: E0130 06:17:30.116575 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.116647 kubelet[2655]: W0130 06:17:30.116582 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.116647 kubelet[2655]: E0130 06:17:30.116590 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.117394 kubelet[2655]: E0130 06:17:30.117282 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.117435 kubelet[2655]: W0130 06:17:30.117297 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.117480 kubelet[2655]: E0130 06:17:30.117439 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.118034 kubelet[2655]: E0130 06:17:30.118009 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.118034 kubelet[2655]: W0130 06:17:30.118028 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.118099 kubelet[2655]: E0130 06:17:30.118045 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.119049 kubelet[2655]: E0130 06:17:30.119020 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.119049 kubelet[2655]: W0130 06:17:30.119038 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.120434 kubelet[2655]: E0130 06:17:30.119207 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.120434 kubelet[2655]: E0130 06:17:30.119543 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.120434 kubelet[2655]: W0130 06:17:30.119553 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.120434 kubelet[2655]: E0130 06:17:30.119684 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.120434 kubelet[2655]: E0130 06:17:30.120180 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.120434 kubelet[2655]: W0130 06:17:30.120192 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.120581 kubelet[2655]: E0130 06:17:30.120524 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.120581 kubelet[2655]: W0130 06:17:30.120535 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.120716 kubelet[2655]: E0130 06:17:30.120694 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.120776 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.120907 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124129 kubelet[2655]: W0130 06:17:30.120915 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.120926 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.121182 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124129 kubelet[2655]: W0130 06:17:30.121190 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.121209 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.121431 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124129 kubelet[2655]: W0130 06:17:30.121438 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124129 kubelet[2655]: E0130 06:17:30.121473 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.121710 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124359 kubelet[2655]: W0130 06:17:30.121721 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.121738 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.121989 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124359 kubelet[2655]: W0130 06:17:30.121999 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.122092 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.122488 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124359 kubelet[2655]: W0130 06:17:30.122497 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.122590 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124359 kubelet[2655]: E0130 06:17:30.122736 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124543 kubelet[2655]: W0130 06:17:30.122746 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.122851 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.123032 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124543 kubelet[2655]: W0130 06:17:30.123073 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.123092 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.123438 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124543 kubelet[2655]: W0130 06:17:30.123461 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.123491 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124543 kubelet[2655]: E0130 06:17:30.123937 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124543 kubelet[2655]: W0130 06:17:30.123948 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124713 kubelet[2655]: E0130 06:17:30.123963 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124713 kubelet[2655]: E0130 06:17:30.124234 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124713 kubelet[2655]: W0130 06:17:30.124245 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124713 kubelet[2655]: E0130 06:17:30.124325 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124713 kubelet[2655]: E0130 06:17:30.124607 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124713 kubelet[2655]: W0130 06:17:30.124618 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.124713 kubelet[2655]: E0130 06:17:30.124648 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.124997 kubelet[2655]: E0130 06:17:30.124978 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.124997 kubelet[2655]: W0130 06:17:30.124994 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.125060 kubelet[2655]: E0130 06:17:30.125009 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 06:17:30.125325 kubelet[2655]: E0130 06:17:30.125310 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.125325 kubelet[2655]: W0130 06:17:30.125325 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.125387 kubelet[2655]: E0130 06:17:30.125337 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.135317 systemd[1]: Started cri-containerd-37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08.scope - libcontainer container 37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08. Jan 30 06:17:30.152534 kubelet[2655]: E0130 06:17:30.152506 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 06:17:30.152534 kubelet[2655]: W0130 06:17:30.152526 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 06:17:30.153147 kubelet[2655]: E0130 06:17:30.152542 2655 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 06:17:30.154070 containerd[1500]: time="2025-01-30T06:17:30.153997740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:30.154218 containerd[1500]: time="2025-01-30T06:17:30.154189730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:30.154302 containerd[1500]: time="2025-01-30T06:17:30.154280280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:30.154790 containerd[1500]: time="2025-01-30T06:17:30.154747165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:30.188245 systemd[1]: Started cri-containerd-59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8.scope - libcontainer container 59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8. 
Jan 30 06:17:30.217134 containerd[1500]: time="2025-01-30T06:17:30.216830813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rv56k,Uid:54b87580-46dc-4595-a2b2-8b2f0959f962,Namespace:calico-system,Attempt:0,} returns sandbox id \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\"" Jan 30 06:17:30.231963 containerd[1500]: time="2025-01-30T06:17:30.231895579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 06:17:30.273960 containerd[1500]: time="2025-01-30T06:17:30.272838624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7f987868-v49j8,Uid:41bb3004-e4ad-4cc9-a7de-dc2d30b14a04,Namespace:calico-system,Attempt:0,} returns sandbox id \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\"" Jan 30 06:17:31.735240 kubelet[2655]: E0130 06:17:31.734182 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:31.797693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736333116.mount: Deactivated successfully. Jan 30 06:17:31.881816 containerd[1500]: time="2025-01-30T06:17:31.881746251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:31.882712 containerd[1500]: time="2025-01-30T06:17:31.882657298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 06:17:31.883638 containerd[1500]: time="2025-01-30T06:17:31.883584385Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:31.885832 containerd[1500]: time="2025-01-30T06:17:31.885809224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:31.886726 containerd[1500]: time="2025-01-30T06:17:31.886332845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.654252541s" Jan 30 06:17:31.886726 containerd[1500]: time="2025-01-30T06:17:31.886375665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 06:17:31.887539 containerd[1500]: time="2025-01-30T06:17:31.887332227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 06:17:31.888534 containerd[1500]: time="2025-01-30T06:17:31.888400750Z" level=info msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 06:17:31.904380 containerd[1500]: time="2025-01-30T06:17:31.904312485Z" level=info 
msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\"" Jan 30 06:17:31.905345 containerd[1500]: time="2025-01-30T06:17:31.905312759Z" level=info msg="StartContainer for \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\"" Jan 30 06:17:31.953231 systemd[1]: Started cri-containerd-a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a.scope - libcontainer container a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a. Jan 30 06:17:31.988198 containerd[1500]: time="2025-01-30T06:17:31.986682767Z" level=info msg="StartContainer for \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\" returns successfully" Jan 30 06:17:32.003204 systemd[1]: cri-containerd-a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a.scope: Deactivated successfully. Jan 30 06:17:32.029199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a-rootfs.mount: Deactivated successfully. Jan 30 06:17:32.050535 containerd[1500]: time="2025-01-30T06:17:32.050468269Z" level=info msg="shim disconnected" id=a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a namespace=k8s.io Jan 30 06:17:32.050535 containerd[1500]: time="2025-01-30T06:17:32.050529844Z" level=warning msg="cleaning up after shim disconnected" id=a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a namespace=k8s.io Jan 30 06:17:32.050535 containerd[1500]: time="2025-01-30T06:17:32.050538762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:17:33.755374 kubelet[2655]: E0130 06:17:33.755309 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:34.441485 containerd[1500]: time="2025-01-30T06:17:34.441434942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:34.442676 containerd[1500]: time="2025-01-30T06:17:34.442548379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 06:17:34.444427 containerd[1500]: time="2025-01-30T06:17:34.443395357Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:34.445842 containerd[1500]: time="2025-01-30T06:17:34.445310475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:34.445842 containerd[1500]: time="2025-01-30T06:17:34.445732015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.558372116s" Jan 30 06:17:34.445842 
containerd[1500]: time="2025-01-30T06:17:34.445756000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 06:17:34.448681 containerd[1500]: time="2025-01-30T06:17:34.448638622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 06:17:34.460701 containerd[1500]: time="2025-01-30T06:17:34.460533863Z" level=info msg="CreateContainer within sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 06:17:34.475485 containerd[1500]: time="2025-01-30T06:17:34.475449193Z" level=info msg="CreateContainer within sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\"" Jan 30 06:17:34.475989 containerd[1500]: time="2025-01-30T06:17:34.475964589Z" level=info msg="StartContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\"" Jan 30 06:17:34.527256 systemd[1]: Started cri-containerd-781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee.scope - libcontainer container 781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee. Jan 30 06:17:34.574398 containerd[1500]: time="2025-01-30T06:17:34.574348381Z" level=info msg="StartContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" returns successfully" Jan 30 06:17:34.852910 kubelet[2655]: I0130 06:17:34.852848 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f7f987868-v49j8" podStartSLOduration=1.6801205430000001 podStartE2EDuration="5.852823397s" podCreationTimestamp="2025-01-30 06:17:29 +0000 UTC" firstStartedPulling="2025-01-30 06:17:30.27393018 +0000 UTC m=+12.649500916" lastFinishedPulling="2025-01-30 06:17:34.446633034 +0000 UTC m=+16.822203770" observedRunningTime="2025-01-30 06:17:34.852058314 +0000 UTC m=+17.227629060" watchObservedRunningTime="2025-01-30 06:17:34.852823397 +0000 UTC m=+17.228394143" Jan 30 06:17:35.734005 kubelet[2655]: E0130 06:17:35.733952 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:35.831182 kubelet[2655]: I0130 06:17:35.831147 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:17:37.735924 kubelet[2655]: E0130 06:17:37.734508 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:39.181809 containerd[1500]: time="2025-01-30T06:17:39.181749967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:39.182794 containerd[1500]: time="2025-01-30T06:17:39.182735825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 06:17:39.183931 containerd[1500]: 
time="2025-01-30T06:17:39.183900448Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:39.186043 containerd[1500]: time="2025-01-30T06:17:39.185987939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:39.186949 containerd[1500]: time="2025-01-30T06:17:39.186478890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.737799832s" Jan 30 06:17:39.186949 containerd[1500]: time="2025-01-30T06:17:39.186518154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 06:17:39.189411 containerd[1500]: time="2025-01-30T06:17:39.189347757Z" level=info msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 06:17:39.202924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317170722.mount: Deactivated successfully. Jan 30 06:17:39.204391 containerd[1500]: time="2025-01-30T06:17:39.204341407Z" level=info msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\"" Jan 30 06:17:39.205826 containerd[1500]: time="2025-01-30T06:17:39.205227577Z" level=info msg="StartContainer for \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\"" Jan 30 06:17:39.255256 systemd[1]: Started cri-containerd-15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15.scope - libcontainer container 15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15. Jan 30 06:17:39.285948 containerd[1500]: time="2025-01-30T06:17:39.285875991Z" level=info msg="StartContainer for \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\" returns successfully" Jan 30 06:17:39.688928 systemd[1]: cri-containerd-15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15.scope: Deactivated successfully. Jan 30 06:17:39.725949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15-rootfs.mount: Deactivated successfully. 
Jan 30 06:17:39.737576 kubelet[2655]: E0130 06:17:39.736470 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:39.774291 containerd[1500]: time="2025-01-30T06:17:39.773987960Z" level=info msg="shim disconnected" id=15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15 namespace=k8s.io Jan 30 06:17:39.774291 containerd[1500]: time="2025-01-30T06:17:39.774043464Z" level=warning msg="cleaning up after shim disconnected" id=15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15 namespace=k8s.io Jan 30 06:17:39.774291 containerd[1500]: time="2025-01-30T06:17:39.774052150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:17:39.785736 kubelet[2655]: I0130 06:17:39.785718 2655 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 06:17:39.831942 systemd[1]: Created slice kubepods-burstable-pode4a0874f_e29f_4582_aee0_6703edd12dfa.slice - libcontainer container kubepods-burstable-pode4a0874f_e29f_4582_aee0_6703edd12dfa.slice. Jan 30 06:17:39.846091 systemd[1]: Created slice kubepods-besteffort-podcf422376_3d63_4f20_9072_0d2f8b49abb2.slice - libcontainer container kubepods-besteffort-podcf422376_3d63_4f20_9072_0d2f8b49abb2.slice. Jan 30 06:17:39.857671 containerd[1500]: time="2025-01-30T06:17:39.857490964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 06:17:39.861274 systemd[1]: Created slice kubepods-burstable-pod8e994494_74b7_4b5f_9da4_52083e102b26.slice - libcontainer container kubepods-burstable-pod8e994494_74b7_4b5f_9da4_52083e102b26.slice. Jan 30 06:17:39.872278 systemd[1]: Created slice kubepods-besteffort-podf41a5a96_3618_4093_895b_a47df0dad582.slice - libcontainer container kubepods-besteffort-podf41a5a96_3618_4093_895b_a47df0dad582.slice. Jan 30 06:17:39.882894 systemd[1]: Created slice kubepods-besteffort-pod41640a56_2c47_4ec4_99af_6f90c868637c.slice - libcontainer container kubepods-besteffort-pod41640a56_2c47_4ec4_99af_6f90c868637c.slice. 
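The "Created slice kubepods-burstable-pod…" and "kubepods-besteffort-pod…" units above are the kubelet's systemd cgroup driver creating a per-pod slice: the pod's QoS class selects the parent slice, and the pod UID is embedded with its dashes turned into underscores, because "-" marks hierarchy in systemd slice names. A short sketch reproducing the names seen in the log (podSliceName is a hypothetical helper written for illustration, not a kubelet function):

package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice leaf name the log shows for a pod,
// given its QoS class and UID.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "Created slice kubepods-burstable-pode4a0874f_e29f_4582_aee0_6703edd12dfa.slice" above.
	fmt.Println(podSliceName("burstable", "e4a0874f-e29f-4582-aee0-6703edd12dfa"))
	// Matches "Created slice kubepods-besteffort-podcf422376_3d63_4f20_9072_0d2f8b49abb2.slice" above.
	fmt.Println(podSliceName("besteffort", "cf422376-3d63-4f20-9072-0d2f8b49abb2"))
}

The same UIDs reappear in the reconciler_common volume entries below, which is how the attached volumes can be tied back to their pods.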
Jan 30 06:17:39.895232 kubelet[2655]: I0130 06:17:39.894807 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf422376-3d63-4f20-9072-0d2f8b49abb2-calico-apiserver-certs\") pod \"calico-apiserver-85d85c8757-nkrq9\" (UID: \"cf422376-3d63-4f20-9072-0d2f8b49abb2\") " pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" Jan 30 06:17:39.895232 kubelet[2655]: I0130 06:17:39.894852 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41640a56-2c47-4ec4-99af-6f90c868637c-tigera-ca-bundle\") pod \"calico-kube-controllers-6b57545dfb-9fz6l\" (UID: \"41640a56-2c47-4ec4-99af-6f90c868637c\") " pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" Jan 30 06:17:39.895232 kubelet[2655]: I0130 06:17:39.894893 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p862g\" (UniqueName: \"kubernetes.io/projected/e4a0874f-e29f-4582-aee0-6703edd12dfa-kube-api-access-p862g\") pod \"coredns-668d6bf9bc-pvvt2\" (UID: \"e4a0874f-e29f-4582-aee0-6703edd12dfa\") " pod="kube-system/coredns-668d6bf9bc-pvvt2" Jan 30 06:17:39.895232 kubelet[2655]: I0130 06:17:39.894907 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7hb\" (UniqueName: \"kubernetes.io/projected/41640a56-2c47-4ec4-99af-6f90c868637c-kube-api-access-2d7hb\") pod \"calico-kube-controllers-6b57545dfb-9fz6l\" (UID: \"41640a56-2c47-4ec4-99af-6f90c868637c\") " pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" Jan 30 06:17:39.895232 kubelet[2655]: I0130 06:17:39.894923 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6c2h\" (UniqueName: \"kubernetes.io/projected/f41a5a96-3618-4093-895b-a47df0dad582-kube-api-access-v6c2h\") pod \"calico-apiserver-85d85c8757-jvjkv\" (UID: \"f41a5a96-3618-4093-895b-a47df0dad582\") " pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" Jan 30 06:17:39.896967 kubelet[2655]: I0130 06:17:39.894935 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8mz\" (UniqueName: \"kubernetes.io/projected/8e994494-74b7-4b5f-9da4-52083e102b26-kube-api-access-8w8mz\") pod \"coredns-668d6bf9bc-twvvf\" (UID: \"8e994494-74b7-4b5f-9da4-52083e102b26\") " pod="kube-system/coredns-668d6bf9bc-twvvf" Jan 30 06:17:39.896967 kubelet[2655]: I0130 06:17:39.894957 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a0874f-e29f-4582-aee0-6703edd12dfa-config-volume\") pod \"coredns-668d6bf9bc-pvvt2\" (UID: \"e4a0874f-e29f-4582-aee0-6703edd12dfa\") " pod="kube-system/coredns-668d6bf9bc-pvvt2" Jan 30 06:17:39.896967 kubelet[2655]: I0130 06:17:39.894971 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vplt\" (UniqueName: \"kubernetes.io/projected/cf422376-3d63-4f20-9072-0d2f8b49abb2-kube-api-access-8vplt\") pod \"calico-apiserver-85d85c8757-nkrq9\" (UID: \"cf422376-3d63-4f20-9072-0d2f8b49abb2\") " pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" Jan 30 06:17:39.896967 kubelet[2655]: I0130 06:17:39.895903 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f41a5a96-3618-4093-895b-a47df0dad582-calico-apiserver-certs\") pod \"calico-apiserver-85d85c8757-jvjkv\" (UID: \"f41a5a96-3618-4093-895b-a47df0dad582\") " pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" Jan 30 06:17:39.896967 kubelet[2655]: I0130 06:17:39.895974 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e994494-74b7-4b5f-9da4-52083e102b26-config-volume\") pod \"coredns-668d6bf9bc-twvvf\" (UID: \"8e994494-74b7-4b5f-9da4-52083e102b26\") " pod="kube-system/coredns-668d6bf9bc-twvvf" Jan 30 06:17:40.144959 containerd[1500]: time="2025-01-30T06:17:40.144882014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvvt2,Uid:e4a0874f-e29f-4582-aee0-6703edd12dfa,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:40.154911 containerd[1500]: time="2025-01-30T06:17:40.154609945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-nkrq9,Uid:cf422376-3d63-4f20-9072-0d2f8b49abb2,Namespace:calico-apiserver,Attempt:0,}" Jan 30 06:17:40.171945 containerd[1500]: time="2025-01-30T06:17:40.171637237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-twvvf,Uid:8e994494-74b7-4b5f-9da4-52083e102b26,Namespace:kube-system,Attempt:0,}" Jan 30 06:17:40.194903 containerd[1500]: time="2025-01-30T06:17:40.194843749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b57545dfb-9fz6l,Uid:41640a56-2c47-4ec4-99af-6f90c868637c,Namespace:calico-system,Attempt:0,}" Jan 30 06:17:40.195994 containerd[1500]: time="2025-01-30T06:17:40.195561775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-jvjkv,Uid:f41a5a96-3618-4093-895b-a47df0dad582,Namespace:calico-apiserver,Attempt:0,}" Jan 30 06:17:40.450973 containerd[1500]: time="2025-01-30T06:17:40.450870543Z" level=error msg="Failed to destroy network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.454680 containerd[1500]: time="2025-01-30T06:17:40.454652381Z" level=error msg="encountered an error cleaning up failed sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.454806 containerd[1500]: time="2025-01-30T06:17:40.454785951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-twvvf,Uid:8e994494-74b7-4b5f-9da4-52083e102b26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.455297 kubelet[2655]: E0130 06:17:40.455179 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.455491 containerd[1500]: time="2025-01-30T06:17:40.455468761Z" level=error msg="Failed to destroy network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.455621 kubelet[2655]: E0130 06:17:40.455598 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-twvvf" Jan 30 06:17:40.455723 kubelet[2655]: E0130 06:17:40.455705 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-twvvf" Jan 30 06:17:40.455865 kubelet[2655]: E0130 06:17:40.455821 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-twvvf_kube-system(8e994494-74b7-4b5f-9da4-52083e102b26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-twvvf_kube-system(8e994494-74b7-4b5f-9da4-52083e102b26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-twvvf" podUID="8e994494-74b7-4b5f-9da4-52083e102b26" Jan 30 06:17:40.456819 containerd[1500]: time="2025-01-30T06:17:40.456603537Z" level=error msg="encountered an error cleaning up failed sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.456819 containerd[1500]: time="2025-01-30T06:17:40.456655845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-jvjkv,Uid:f41a5a96-3618-4093-895b-a47df0dad582,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.462750 containerd[1500]: time="2025-01-30T06:17:40.462727154Z" level=error msg="Failed to destroy network for sandbox 
\"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.463098 containerd[1500]: time="2025-01-30T06:17:40.463075737Z" level=error msg="encountered an error cleaning up failed sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.463217 containerd[1500]: time="2025-01-30T06:17:40.463196313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-nkrq9,Uid:cf422376-3d63-4f20-9072-0d2f8b49abb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.463538 kubelet[2655]: E0130 06:17:40.463502 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.463925 kubelet[2655]: E0130 06:17:40.463641 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" Jan 30 06:17:40.463925 kubelet[2655]: E0130 06:17:40.463511 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.463925 kubelet[2655]: E0130 06:17:40.463691 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" Jan 30 06:17:40.463925 kubelet[2655]: E0130 06:17:40.463705 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" Jan 30 06:17:40.464066 containerd[1500]: time="2025-01-30T06:17:40.463634995Z" level=error msg="Failed to destroy network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.464143 kubelet[2655]: E0130 06:17:40.463734 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d85c8757-nkrq9_calico-apiserver(cf422376-3d63-4f20-9072-0d2f8b49abb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d85c8757-nkrq9_calico-apiserver(cf422376-3d63-4f20-9072-0d2f8b49abb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" podUID="cf422376-3d63-4f20-9072-0d2f8b49abb2" Jan 30 06:17:40.464331 kubelet[2655]: E0130 06:17:40.464231 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" Jan 30 06:17:40.464331 kubelet[2655]: E0130 06:17:40.464302 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d85c8757-jvjkv_calico-apiserver(f41a5a96-3618-4093-895b-a47df0dad582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d85c8757-jvjkv_calico-apiserver(f41a5a96-3618-4093-895b-a47df0dad582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" podUID="f41a5a96-3618-4093-895b-a47df0dad582" Jan 30 06:17:40.464954 containerd[1500]: time="2025-01-30T06:17:40.464916136Z" level=error msg="encountered an error cleaning up failed sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.465009 containerd[1500]: time="2025-01-30T06:17:40.464972612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvvt2,Uid:e4a0874f-e29f-4582-aee0-6703edd12dfa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.465307 kubelet[2655]: E0130 06:17:40.465193 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.465307 kubelet[2655]: E0130 06:17:40.465253 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvvt2" Jan 30 06:17:40.465307 kubelet[2655]: E0130 06:17:40.465273 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvvt2" Jan 30 06:17:40.465492 kubelet[2655]: E0130 06:17:40.465462 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pvvt2_kube-system(e4a0874f-e29f-4582-aee0-6703edd12dfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pvvt2_kube-system(e4a0874f-e29f-4582-aee0-6703edd12dfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pvvt2" podUID="e4a0874f-e29f-4582-aee0-6703edd12dfa" Jan 30 06:17:40.478756 containerd[1500]: time="2025-01-30T06:17:40.478712373Z" level=error msg="Failed to destroy network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.479060 containerd[1500]: time="2025-01-30T06:17:40.479028105Z" level=error msg="encountered an error cleaning up failed sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.479257 containerd[1500]: time="2025-01-30T06:17:40.479074040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b57545dfb-9fz6l,Uid:41640a56-2c47-4ec4-99af-6f90c868637c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.479360 kubelet[2655]: E0130 06:17:40.479239 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.479360 kubelet[2655]: E0130 06:17:40.479270 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" Jan 30 06:17:40.479360 kubelet[2655]: E0130 06:17:40.479284 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" Jan 30 06:17:40.479492 kubelet[2655]: E0130 06:17:40.479313 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b57545dfb-9fz6l_calico-system(41640a56-2c47-4ec4-99af-6f90c868637c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b57545dfb-9fz6l_calico-system(41640a56-2c47-4ec4-99af-6f90c868637c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" podUID="41640a56-2c47-4ec4-99af-6f90c868637c" Jan 30 06:17:40.858022 kubelet[2655]: I0130 06:17:40.856849 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:40.858687 kubelet[2655]: I0130 06:17:40.858655 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:40.863882 containerd[1500]: time="2025-01-30T06:17:40.862715463Z" level=info msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" Jan 30 06:17:40.863882 containerd[1500]: time="2025-01-30T06:17:40.863649985Z" level=info msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" Jan 30 06:17:40.864918 containerd[1500]: time="2025-01-30T06:17:40.864869189Z" level=info msg="Ensure that sandbox b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92 in task-service has been cleanup successfully" Jan 30 06:17:40.865187 containerd[1500]: 
time="2025-01-30T06:17:40.864875181Z" level=info msg="Ensure that sandbox 5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2 in task-service has been cleanup successfully" Jan 30 06:17:40.868733 kubelet[2655]: I0130 06:17:40.868699 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:40.870269 containerd[1500]: time="2025-01-30T06:17:40.870205100Z" level=info msg="StopPodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" Jan 30 06:17:40.873450 containerd[1500]: time="2025-01-30T06:17:40.873423671Z" level=info msg="Ensure that sandbox fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a in task-service has been cleanup successfully" Jan 30 06:17:40.883259 kubelet[2655]: I0130 06:17:40.882321 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:40.885849 containerd[1500]: time="2025-01-30T06:17:40.885822048Z" level=info msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" Jan 30 06:17:40.888170 containerd[1500]: time="2025-01-30T06:17:40.888088385Z" level=info msg="Ensure that sandbox 4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8 in task-service has been cleanup successfully" Jan 30 06:17:40.893153 kubelet[2655]: I0130 06:17:40.893135 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:40.895921 containerd[1500]: time="2025-01-30T06:17:40.895897902Z" level=info msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" Jan 30 06:17:40.901454 containerd[1500]: time="2025-01-30T06:17:40.901430561Z" level=info msg="Ensure that sandbox 8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36 in task-service has been cleanup successfully" Jan 30 06:17:40.938477 containerd[1500]: time="2025-01-30T06:17:40.938418722Z" level=error msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" failed" error="failed to destroy network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.938685 kubelet[2655]: E0130 06:17:40.938649 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:40.938755 kubelet[2655]: E0130 06:17:40.938708 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36"} Jan 30 06:17:40.939351 kubelet[2655]: E0130 06:17:40.938762 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4a0874f-e29f-4582-aee0-6703edd12dfa\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:40.939351 kubelet[2655]: E0130 06:17:40.938785 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4a0874f-e29f-4582-aee0-6703edd12dfa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pvvt2" podUID="e4a0874f-e29f-4582-aee0-6703edd12dfa" Jan 30 06:17:40.947327 containerd[1500]: time="2025-01-30T06:17:40.946176562Z" level=error msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" failed" error="failed to destroy network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.947397 kubelet[2655]: E0130 06:17:40.946371 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:40.947397 kubelet[2655]: E0130 06:17:40.946430 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2"} Jan 30 06:17:40.947397 kubelet[2655]: E0130 06:17:40.946798 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41640a56-2c47-4ec4-99af-6f90c868637c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:40.947397 kubelet[2655]: E0130 06:17:40.947232 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41640a56-2c47-4ec4-99af-6f90c868637c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" podUID="41640a56-2c47-4ec4-99af-6f90c868637c" Jan 30 06:17:40.957515 containerd[1500]: time="2025-01-30T06:17:40.957379898Z" level=error msg="StopPodSandbox for 
\"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" failed" error="failed to destroy network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.957684 kubelet[2655]: E0130 06:17:40.957600 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:40.957684 kubelet[2655]: E0130 06:17:40.957642 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a"} Jan 30 06:17:40.957684 kubelet[2655]: E0130 06:17:40.957672 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e994494-74b7-4b5f-9da4-52083e102b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:40.958256 kubelet[2655]: E0130 06:17:40.957692 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e994494-74b7-4b5f-9da4-52083e102b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-twvvf" podUID="8e994494-74b7-4b5f-9da4-52083e102b26" Jan 30 06:17:40.964669 containerd[1500]: time="2025-01-30T06:17:40.964628323Z" level=error msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" failed" error="failed to destroy network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.964897 kubelet[2655]: E0130 06:17:40.964860 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:40.964958 kubelet[2655]: E0130 06:17:40.964910 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92"} Jan 30 06:17:40.964958 kubelet[2655]: E0130 06:17:40.964953 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f41a5a96-3618-4093-895b-a47df0dad582\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:40.965093 kubelet[2655]: E0130 06:17:40.964985 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f41a5a96-3618-4093-895b-a47df0dad582\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" podUID="f41a5a96-3618-4093-895b-a47df0dad582" Jan 30 06:17:40.968901 containerd[1500]: time="2025-01-30T06:17:40.968846600Z" level=error msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" failed" error="failed to destroy network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:40.969007 kubelet[2655]: E0130 06:17:40.968974 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:40.969065 kubelet[2655]: E0130 06:17:40.969007 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8"} Jan 30 06:17:40.969065 kubelet[2655]: E0130 06:17:40.969033 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf422376-3d63-4f20-9072-0d2f8b49abb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:40.969065 kubelet[2655]: E0130 06:17:40.969051 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf422376-3d63-4f20-9072-0d2f8b49abb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" podUID="cf422376-3d63-4f20-9072-0d2f8b49abb2" Jan 30 06:17:41.199971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92-shm.mount: Deactivated successfully. Jan 30 06:17:41.201315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2-shm.mount: Deactivated successfully. Jan 30 06:17:41.201467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a-shm.mount: Deactivated successfully. Jan 30 06:17:41.201662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8-shm.mount: Deactivated successfully. Jan 30 06:17:41.201744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36-shm.mount: Deactivated successfully. Jan 30 06:17:41.739618 systemd[1]: Created slice kubepods-besteffort-pod60be3982_84c5_43fa_a1af_7b15bfd904a3.slice - libcontainer container kubepods-besteffort-pod60be3982_84c5_43fa_a1af_7b15bfd904a3.slice. Jan 30 06:17:41.741992 containerd[1500]: time="2025-01-30T06:17:41.741948557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghtnv,Uid:60be3982-84c5-43fa-a1af-7b15bfd904a3,Namespace:calico-system,Attempt:0,}" Jan 30 06:17:41.812795 containerd[1500]: time="2025-01-30T06:17:41.812736987Z" level=error msg="Failed to destroy network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:41.815028 containerd[1500]: time="2025-01-30T06:17:41.814952429Z" level=error msg="encountered an error cleaning up failed sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:41.814777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2-shm.mount: Deactivated successfully. 
Jan 30 06:17:41.815165 containerd[1500]: time="2025-01-30T06:17:41.815026950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghtnv,Uid:60be3982-84c5-43fa-a1af-7b15bfd904a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:41.815285 kubelet[2655]: E0130 06:17:41.815245 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:41.815344 kubelet[2655]: E0130 06:17:41.815302 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:41.815344 kubelet[2655]: E0130 06:17:41.815335 2655 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghtnv" Jan 30 06:17:41.815392 kubelet[2655]: E0130 06:17:41.815368 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghtnv_calico-system(60be3982-84c5-43fa-a1af-7b15bfd904a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghtnv_calico-system(60be3982-84c5-43fa-a1af-7b15bfd904a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:41.896607 kubelet[2655]: I0130 06:17:41.896552 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:41.897987 containerd[1500]: time="2025-01-30T06:17:41.897276941Z" level=info msg="StopPodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" Jan 30 06:17:41.897987 containerd[1500]: time="2025-01-30T06:17:41.897466607Z" level=info msg="Ensure that sandbox 236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2 in task-service has been cleanup successfully" Jan 30 06:17:41.927269 containerd[1500]: time="2025-01-30T06:17:41.927215052Z" level=error msg="StopPodSandbox for 
\"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" failed" error="failed to destroy network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 06:17:41.927511 kubelet[2655]: E0130 06:17:41.927455 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:41.927571 kubelet[2655]: E0130 06:17:41.927506 2655 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2"} Jan 30 06:17:41.927571 kubelet[2655]: E0130 06:17:41.927541 2655 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60be3982-84c5-43fa-a1af-7b15bfd904a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 06:17:41.927571 kubelet[2655]: E0130 06:17:41.927562 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60be3982-84c5-43fa-a1af-7b15bfd904a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghtnv" podUID="60be3982-84c5-43fa-a1af-7b15bfd904a3" Jan 30 06:17:46.784311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289536759.mount: Deactivated successfully. 
Jan 30 06:17:46.880732 containerd[1500]: time="2025-01-30T06:17:46.880600046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:46.881386 containerd[1500]: time="2025-01-30T06:17:46.881182617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 06:17:46.914026 containerd[1500]: time="2025-01-30T06:17:46.913140699Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:46.914026 containerd[1500]: time="2025-01-30T06:17:46.913863744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.056301077s" Jan 30 06:17:46.914026 containerd[1500]: time="2025-01-30T06:17:46.913898779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 06:17:46.914360 containerd[1500]: time="2025-01-30T06:17:46.914243034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:46.982601 containerd[1500]: time="2025-01-30T06:17:46.982531963Z" level=info msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 06:17:47.074174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101895750.mount: Deactivated successfully. Jan 30 06:17:47.087344 containerd[1500]: time="2025-01-30T06:17:47.087308924Z" level=info msg="CreateContainer within sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\"" Jan 30 06:17:47.090670 containerd[1500]: time="2025-01-30T06:17:47.090365404Z" level=info msg="StartContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\"" Jan 30 06:17:47.221248 systemd[1]: Started cri-containerd-6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151.scope - libcontainer container 6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151. Jan 30 06:17:47.262310 containerd[1500]: time="2025-01-30T06:17:47.261905415Z" level=info msg="StartContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" returns successfully" Jan 30 06:17:47.366721 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 06:17:47.369045 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
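The calico/node image pull above is reported as taking 7.056301077s. That figure can be sanity-checked against the surrounding log timestamps: PullImage was logged at 06:17:39.857490964Z and the Pulled message at 06:17:46.913863744Z. A small Go check of the arithmetic, using only timestamps copied from this log (the two figures differ by well under a millisecond because the log line is emitted just after the duration is measured):

package main

import (
    "fmt"
    "time"
)

func main() {
    start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T06:17:39.857490964Z")
    done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T06:17:46.913863744Z")
    // ~7.0564s of wall clock versus the reported 7.056301077s pull duration.
    fmt.Println("wall clock between PullImage and Pulled:", done.Sub(start))
}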
Jan 30 06:17:47.955163 kubelet[2655]: I0130 06:17:47.950364 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rv56k" podStartSLOduration=2.244744406 podStartE2EDuration="18.935413848s" podCreationTimestamp="2025-01-30 06:17:29 +0000 UTC" firstStartedPulling="2025-01-30 06:17:30.228906168 +0000 UTC m=+12.604476904" lastFinishedPulling="2025-01-30 06:17:46.91957561 +0000 UTC m=+29.295146346" observedRunningTime="2025-01-30 06:17:47.935052971 +0000 UTC m=+30.310623707" watchObservedRunningTime="2025-01-30 06:17:47.935413848 +0000 UTC m=+30.310984584" Jan 30 06:17:48.175311 kubelet[2655]: I0130 06:17:48.175244 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:17:48.917812 kubelet[2655]: I0130 06:17:48.917771 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:17:49.040731 kernel: bpftool[3919]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 06:17:49.304281 systemd-networkd[1401]: vxlan.calico: Link UP Jan 30 06:17:49.305796 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 30 06:17:50.861460 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 30 06:17:51.737210 containerd[1500]: time="2025-01-30T06:17:51.735359806Z" level=info msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.842 [INFO][4042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.843 [INFO][4042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" iface="eth0" netns="/var/run/netns/cni-3152232a-b3a4-d814-fc68-a514eeb54b50" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.844 [INFO][4042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" iface="eth0" netns="/var/run/netns/cni-3152232a-b3a4-d814-fc68-a514eeb54b50" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.847 [INFO][4042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" iface="eth0" netns="/var/run/netns/cni-3152232a-b3a4-d814-fc68-a514eeb54b50" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.847 [INFO][4042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:51.847 [INFO][4042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.004 [INFO][4048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.007 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.007 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.020 [WARNING][4048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.020 [INFO][4048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.021 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:52.027408 containerd[1500]: 2025-01-30 06:17:52.024 [INFO][4042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:17:52.028504 containerd[1500]: time="2025-01-30T06:17:52.028256641Z" level=info msg="TearDown network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" successfully" Jan 30 06:17:52.028504 containerd[1500]: time="2025-01-30T06:17:52.028285164Z" level=info msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" returns successfully" Jan 30 06:17:52.030632 systemd[1]: run-netns-cni\x2d3152232a\x2db3a4\x2dd814\x2dfc68\x2da514eeb54b50.mount: Deactivated successfully. 
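By this point the picture has reversed: calico-node is running, systemd-networkd reports the vxlan.calico overlay device up with carrier and an IPv6 link-local address, and the sandbox teardown that kept failing earlier now completes, netns mount included. A small standard-library Go check, run on the node itself, to confirm the overlay device exists and is up (an ad-hoc sketch, not a tool referenced in the log):

package main

import (
    "fmt"
    "net"
)

func main() {
    ifaces, err := net.Interfaces()
    if err != nil {
        panic(err)
    }
    for _, ifc := range ifaces {
        if ifc.Name != "vxlan.calico" {
            continue
        }
        fmt.Printf("found %s flags=%v mtu=%d\n", ifc.Name, ifc.Flags, ifc.MTU)
        addrs, _ := ifc.Addrs()
        for _, a := range addrs {
            fmt.Println("  addr:", a) // expect at least an IPv6 link-local address
        }
        return
    }
    fmt.Println("vxlan.calico not present")
}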
Jan 30 06:17:52.031334 containerd[1500]: time="2025-01-30T06:17:52.030657069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-jvjkv,Uid:f41a5a96-3618-4093-895b-a47df0dad582,Namespace:calico-apiserver,Attempt:1,}" Jan 30 06:17:52.169962 systemd-networkd[1401]: calif27673ac49d: Link UP Jan 30 06:17:52.171152 systemd-networkd[1401]: calif27673ac49d: Gained carrier Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.087 [INFO][4056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0 calico-apiserver-85d85c8757- calico-apiserver f41a5a96-3618-4093-895b-a47df0dad582 763 0 2025-01-30 06:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d85c8757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 calico-apiserver-85d85c8757-jvjkv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif27673ac49d [] []}} ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.087 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.121 [INFO][4067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" HandleID="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.130 [INFO][4067] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" HandleID="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"calico-apiserver-85d85c8757-jvjkv", "timestamp":"2025-01-30 06:17:52.121972112 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.130 [INFO][4067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.130 [INFO][4067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.130 [INFO][4067] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.133 [INFO][4067] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.142 [INFO][4067] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.146 [INFO][4067] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.148 [INFO][4067] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.149 [INFO][4067] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.149 [INFO][4067] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.151 [INFO][4067] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.155 [INFO][4067] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.161 [INFO][4067] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.193/26] block=192.168.94.192/26 handle="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.161 [INFO][4067] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.193/26] handle="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.161 [INFO][4067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:17:52.188403 containerd[1500]: 2025-01-30 06:17:52.161 [INFO][4067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.193/26] IPv6=[] ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" HandleID="k8s-pod-network.3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.166 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f41a5a96-3618-4093-895b-a47df0dad582", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"calico-apiserver-85d85c8757-jvjkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif27673ac49d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.166 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.193/32] ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.166 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif27673ac49d ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.171 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.171 [INFO][4056] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f41a5a96-3618-4093-895b-a47df0dad582", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a", Pod:"calico-apiserver-85d85c8757-jvjkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif27673ac49d", MAC:"c6:1f:22:73:9d:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:52.190185 containerd[1500]: 2025-01-30 06:17:52.181 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-jvjkv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:17:52.222617 containerd[1500]: time="2025-01-30T06:17:52.222387893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:52.222617 containerd[1500]: time="2025-01-30T06:17:52.222452424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:52.222617 containerd[1500]: time="2025-01-30T06:17:52.222465929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:52.222617 containerd[1500]: time="2025-01-30T06:17:52.222558112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:52.255249 systemd[1]: Started cri-containerd-3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a.scope - libcontainer container 3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a. 
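The ipam.go records above spell out how the address for calico-apiserver-85d85c8757-jvjkv was chosen: node ci-4081-3-0-a-a10ab07ed7 already holds an affinity for the block 192.168.94.192/26, so the plugin loads that block, claims 192.168.94.193, writes the block back to claim the IP, and records the claim under a handle named after the sandbox ID (k8s-pod-network.3d5b4ec5...). Below is a rough, illustrative sketch of claiming a free address from such a block; Calico's real allocator tracks ordinals and handles in its datastore and has its own reservation rules, though skipping the network and broadcast addresses of the /26 happens to reproduce the .193 seen here.

    # Illustrative block allocator: claim the first unused address in an affine /26.
    import ipaddress

    def claim_from_block(block_cidr, allocated, handle_id, handles):
        block = ipaddress.ip_network(block_cidr)
        for ip in block.hosts():                       # .193 .. .254 for this /26
            if ip not in allocated:
                allocated.add(ip)
                handles.setdefault(handle_id, []).append(ip)
                return f"{ip}/{block.prefixlen}"
        raise RuntimeError("block exhausted")

    allocated, handles = set(), {}
    print(claim_from_block("192.168.94.192/26", allocated,
                           "k8s-pod-network.<sandbox id>", handles))   # -> 192.168.94.193/26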
Jan 30 06:17:52.298495 containerd[1500]: time="2025-01-30T06:17:52.298401993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-jvjkv,Uid:f41a5a96-3618-4093-895b-a47df0dad582,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a\"" Jan 30 06:17:52.300848 containerd[1500]: time="2025-01-30T06:17:52.300805258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 06:17:52.734501 containerd[1500]: time="2025-01-30T06:17:52.734277619Z" level=info msg="StopPodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.777 [INFO][4140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.777 [INFO][4140] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" iface="eth0" netns="/var/run/netns/cni-2e921f57-ee95-bf96-7fdb-5148659c3985" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.777 [INFO][4140] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" iface="eth0" netns="/var/run/netns/cni-2e921f57-ee95-bf96-7fdb-5148659c3985" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.778 [INFO][4140] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" iface="eth0" netns="/var/run/netns/cni-2e921f57-ee95-bf96-7fdb-5148659c3985" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.778 [INFO][4140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.778 [INFO][4140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.802 [INFO][4146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.802 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.802 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.808 [WARNING][4146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.808 [INFO][4146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.810 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:52.815140 containerd[1500]: 2025-01-30 06:17:52.812 [INFO][4140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:17:52.816912 containerd[1500]: time="2025-01-30T06:17:52.815306019Z" level=info msg="TearDown network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" successfully" Jan 30 06:17:52.816912 containerd[1500]: time="2025-01-30T06:17:52.815329093Z" level=info msg="StopPodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" returns successfully" Jan 30 06:17:52.816912 containerd[1500]: time="2025-01-30T06:17:52.816320340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-twvvf,Uid:8e994494-74b7-4b5f-9da4-52083e102b26,Namespace:kube-system,Attempt:1,}" Jan 30 06:17:52.912758 systemd-networkd[1401]: cali01f3e4ee1d7: Link UP Jan 30 06:17:52.913453 systemd-networkd[1401]: cali01f3e4ee1d7: Gained carrier Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.852 [INFO][4152] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0 coredns-668d6bf9bc- kube-system 8e994494-74b7-4b5f-9da4-52083e102b26 770 0 2025-01-30 06:17:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 coredns-668d6bf9bc-twvvf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01f3e4ee1d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.852 [INFO][4152] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.877 [INFO][4163] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" HandleID="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.884 
[INFO][4163] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" HandleID="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d8c90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"coredns-668d6bf9bc-twvvf", "timestamp":"2025-01-30 06:17:52.877666454 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.884 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.884 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.884 [INFO][4163] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.886 [INFO][4163] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.889 [INFO][4163] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.892 [INFO][4163] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.894 [INFO][4163] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.896 [INFO][4163] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.896 [INFO][4163] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.897 [INFO][4163] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10 Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.901 [INFO][4163] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.906 [INFO][4163] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.194/26] block=192.168.94.192/26 handle="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.906 [INFO][4163] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.194/26] handle="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:52.934551 
containerd[1500]: 2025-01-30 06:17:52.906 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:52.934551 containerd[1500]: 2025-01-30 06:17:52.906 [INFO][4163] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.194/26] IPv6=[] ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" HandleID="k8s-pod-network.430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.909 [INFO][4152] cni-plugin/k8s.go 386: Populated endpoint ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e994494-74b7-4b5f-9da4-52083e102b26", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"coredns-668d6bf9bc-twvvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f3e4ee1d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.909 [INFO][4152] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.194/32] ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.909 [INFO][4152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01f3e4ee1d7 ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.913 [INFO][4152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.914 [INFO][4152] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e994494-74b7-4b5f-9da4-52083e102b26", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10", Pod:"coredns-668d6bf9bc-twvvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f3e4ee1d7", MAC:"42:40:29:a0:38:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:52.935080 containerd[1500]: 2025-01-30 06:17:52.926 [INFO][4152] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10" Namespace="kube-system" Pod="coredns-668d6bf9bc-twvvf" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:17:52.960986 containerd[1500]: time="2025-01-30T06:17:52.960875030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:52.960986 containerd[1500]: time="2025-01-30T06:17:52.960948488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:52.960986 containerd[1500]: time="2025-01-30T06:17:52.960959579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:52.961312 containerd[1500]: time="2025-01-30T06:17:52.961253460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:52.980275 systemd[1]: Started cri-containerd-430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10.scope - libcontainer container 430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10. Jan 30 06:17:53.021402 containerd[1500]: time="2025-01-30T06:17:53.021349031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-twvvf,Uid:8e994494-74b7-4b5f-9da4-52083e102b26,Namespace:kube-system,Attempt:1,} returns sandbox id \"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10\"" Jan 30 06:17:53.024991 containerd[1500]: time="2025-01-30T06:17:53.024813926Z" level=info msg="CreateContainer within sandbox \"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 06:17:53.034842 systemd[1]: run-containerd-runc-k8s.io-3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a-runc.9FuS9N.mount: Deactivated successfully. Jan 30 06:17:53.035279 systemd[1]: run-netns-cni\x2d2e921f57\x2dee95\x2dbf96\x2d7fdb\x2d5148659c3985.mount: Deactivated successfully. Jan 30 06:17:53.046299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959325680.mount: Deactivated successfully. Jan 30 06:17:53.052678 containerd[1500]: time="2025-01-30T06:17:53.052647281Z" level=info msg="CreateContainer within sandbox \"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83615b8d92de0f1d10c0b8f45a601de5276c74f13ac9332bd0a00f8fa0b8e014\"" Jan 30 06:17:53.054919 containerd[1500]: time="2025-01-30T06:17:53.054889104Z" level=info msg="StartContainer for \"83615b8d92de0f1d10c0b8f45a601de5276c74f13ac9332bd0a00f8fa0b8e014\"" Jan 30 06:17:53.055073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777687458.mount: Deactivated successfully. Jan 30 06:17:53.084271 systemd[1]: Started cri-containerd-83615b8d92de0f1d10c0b8f45a601de5276c74f13ac9332bd0a00f8fa0b8e014.scope - libcontainer container 83615b8d92de0f1d10c0b8f45a601de5276c74f13ac9332bd0a00f8fa0b8e014. 
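The systemd mount units being cleaned up above (run-netns-cni\x2d..., var-lib-containerd-tmpmounts-containerd\x2dmount...) are escaped filesystem paths: systemd drops the leading '/', turns each remaining '/' into '-', and hex-escapes characters that would otherwise be ambiguous, so a literal '-' becomes \x2d. A small decoder, equivalent in spirit to `systemd-escape --unescape --path` (sketch only):

    # Decode a systemd mount-unit name back into the path it was created for.
    def unescape_unit(name):
        name = name.removesuffix(".mount")      # Python 3.9+
        out, i = [], 0
        while i < len(name):
            if name.startswith(r"\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))   # \x2d -> '-'
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    print(unescape_unit(r"run-netns-cni\x2d2e921f57\x2dee95\x2dbf96\x2d7fdb\x2d5148659c3985.mount"))
    # -> /run/netns/cni-2e921f57-ee95-bf96-7fdb-5148659c3985

That is the same per-sandbox namespace the teardown records name as /var/run/netns/cni-2e921f57-..., since /var/run is a symlink to /run on Flatcar.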
Jan 30 06:17:53.116744 containerd[1500]: time="2025-01-30T06:17:53.116679593Z" level=info msg="StartContainer for \"83615b8d92de0f1d10c0b8f45a601de5276c74f13ac9332bd0a00f8fa0b8e014\" returns successfully" Jan 30 06:17:53.229320 systemd-networkd[1401]: calif27673ac49d: Gained IPv6LL Jan 30 06:17:53.976555 kubelet[2655]: I0130 06:17:53.976251 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-twvvf" podStartSLOduration=30.976238333 podStartE2EDuration="30.976238333s" podCreationTimestamp="2025-01-30 06:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:17:53.975748154 +0000 UTC m=+36.351318890" watchObservedRunningTime="2025-01-30 06:17:53.976238333 +0000 UTC m=+36.351809069" Jan 30 06:17:54.063484 systemd-networkd[1401]: cali01f3e4ee1d7: Gained IPv6LL Jan 30 06:17:54.733738 containerd[1500]: time="2025-01-30T06:17:54.733691581Z" level=info msg="StopPodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" Jan 30 06:17:54.735073 containerd[1500]: time="2025-01-30T06:17:54.734848870Z" level=info msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.811 [INFO][4302] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.811 [INFO][4302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" iface="eth0" netns="/var/run/netns/cni-b0993e1d-ec9e-8e7f-b2b3-3e3f14ab6185" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.811 [INFO][4302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" iface="eth0" netns="/var/run/netns/cni-b0993e1d-ec9e-8e7f-b2b3-3e3f14ab6185" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.812 [INFO][4302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" iface="eth0" netns="/var/run/netns/cni-b0993e1d-ec9e-8e7f-b2b3-3e3f14ab6185" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.812 [INFO][4302] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.812 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.851 [INFO][4314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.852 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.852 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.859 [WARNING][4314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.859 [INFO][4314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.867 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:54.879169 containerd[1500]: 2025-01-30 06:17:54.876 [INFO][4302] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:17:54.880769 containerd[1500]: time="2025-01-30T06:17:54.879956385Z" level=info msg="TearDown network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" successfully" Jan 30 06:17:54.880769 containerd[1500]: time="2025-01-30T06:17:54.879981942Z" level=info msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" returns successfully" Jan 30 06:17:54.881673 containerd[1500]: time="2025-01-30T06:17:54.881305553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-nkrq9,Uid:cf422376-3d63-4f20-9072-0d2f8b49abb2,Namespace:calico-apiserver,Attempt:1,}" Jan 30 06:17:54.884344 systemd[1]: run-netns-cni\x2db0993e1d\x2dec9e\x2d8e7f\x2db2b3\x2d3e3f14ab6185.mount: Deactivated successfully. Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.836 [INFO][4301] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.837 [INFO][4301] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" iface="eth0" netns="/var/run/netns/cni-817f8fed-9b51-5229-4fe1-584323172e59" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.837 [INFO][4301] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" iface="eth0" netns="/var/run/netns/cni-817f8fed-9b51-5229-4fe1-584323172e59" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.837 [INFO][4301] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" iface="eth0" netns="/var/run/netns/cni-817f8fed-9b51-5229-4fe1-584323172e59" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.837 [INFO][4301] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.837 [INFO][4301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.891 [INFO][4319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.891 [INFO][4319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.891 [INFO][4319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.907 [WARNING][4319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.907 [INFO][4319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.909 [INFO][4319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:54.916415 containerd[1500]: 2025-01-30 06:17:54.913 [INFO][4301] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:17:54.920254 containerd[1500]: time="2025-01-30T06:17:54.918745988Z" level=info msg="TearDown network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" successfully" Jan 30 06:17:54.920254 containerd[1500]: time="2025-01-30T06:17:54.918792334Z" level=info msg="StopPodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" returns successfully" Jan 30 06:17:54.920800 containerd[1500]: time="2025-01-30T06:17:54.920766515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghtnv,Uid:60be3982-84c5-43fa-a1af-7b15bfd904a3,Namespace:calico-system,Attempt:1,}" Jan 30 06:17:54.922569 systemd[1]: run-netns-cni\x2d817f8fed\x2d9b51\x2d5229\x2d4fe1\x2d584323172e59.mount: Deactivated successfully. 
Jan 30 06:17:55.076291 systemd-networkd[1401]: cali812f35c6775: Link UP Jan 30 06:17:55.076657 systemd-networkd[1401]: cali812f35c6775: Gained carrier Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:54.956 [INFO][4326] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0 calico-apiserver-85d85c8757- calico-apiserver cf422376-3d63-4f20-9072-0d2f8b49abb2 790 0 2025-01-30 06:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d85c8757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 calico-apiserver-85d85c8757-nkrq9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali812f35c6775 [] []}} ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:54.956 [INFO][4326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.003 [INFO][4342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" HandleID="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.019 [INFO][4342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" HandleID="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"calico-apiserver-85d85c8757-nkrq9", "timestamp":"2025-01-30 06:17:55.003603089 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.019 [INFO][4342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.019 [INFO][4342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.019 [INFO][4342] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.024 [INFO][4342] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.035 [INFO][4342] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.043 [INFO][4342] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.047 [INFO][4342] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.051 [INFO][4342] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.051 [INFO][4342] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.053 [INFO][4342] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.060 [INFO][4342] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.067 [INFO][4342] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.195/26] block=192.168.94.192/26 handle="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.067 [INFO][4342] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.195/26] handle="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.067 [INFO][4342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:17:55.093034 containerd[1500]: 2025-01-30 06:17:55.068 [INFO][4342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.195/26] IPv6=[] ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" HandleID="k8s-pod-network.100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.071 [INFO][4326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf422376-3d63-4f20-9072-0d2f8b49abb2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"calico-apiserver-85d85c8757-nkrq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali812f35c6775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.071 [INFO][4326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.195/32] ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.071 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali812f35c6775 ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.077 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.078 [INFO][4326] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf422376-3d63-4f20-9072-0d2f8b49abb2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e", Pod:"calico-apiserver-85d85c8757-nkrq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali812f35c6775", MAC:"5a:9a:a6:9b:52:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:55.094396 containerd[1500]: 2025-01-30 06:17:55.089 [INFO][4326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e" Namespace="calico-apiserver" Pod="calico-apiserver-85d85c8757-nkrq9" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:17:55.131749 containerd[1500]: time="2025-01-30T06:17:55.131140169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:55.132840 containerd[1500]: time="2025-01-30T06:17:55.132761898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:55.132840 containerd[1500]: time="2025-01-30T06:17:55.132786905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:55.133276 containerd[1500]: time="2025-01-30T06:17:55.132874770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:55.167470 systemd[1]: Started cri-containerd-100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e.scope - libcontainer container 100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e. 
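Every IPAM operation in this log, assignments and releases alike, is bracketed by the "About to acquire" / "Acquired" / "Released host-wide IPAM lock" messages, so concurrent CNI ADD and DEL calls on the node read, modify, and write an address block one at a time. The snippet below is only a toy illustration of that bracket in plain Python threading, not Calico code; in Calico the race being avoided is the read-modify-write of the block in the datastore.

    import threading, ipaddress

    lock = threading.Lock()
    free = [str(ip) for ip in ipaddress.ip_network("192.168.94.192/26").hosts()]
    claims = []

    def cni_add(pod):
        with lock:                     # "Acquired host-wide IPAM lock."
            ip = free.pop(0)           # read-modify-write of the block state
            claims.append((pod, ip))   # lock released on exit

    pods = ["apiserver-jvjkv", "coredns-twvvf", "apiserver-nkrq9", "csi-node-driver-ghtnv"]
    threads = [threading.Thread(target=cni_add, args=(p,)) for p in pods]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(claims)   # four distinct addresses from .193-.196, as assigned in this trace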
Jan 30 06:17:55.201203 systemd-networkd[1401]: cali244ead7c958: Link UP Jan 30 06:17:55.202452 systemd-networkd[1401]: cali244ead7c958: Gained carrier Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.014 [INFO][4337] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0 csi-node-driver- calico-system 60be3982-84c5-43fa-a1af-7b15bfd904a3 791 0 2025-01-30 06:17:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 csi-node-driver-ghtnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali244ead7c958 [] []}} ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.014 [INFO][4337] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.060 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" HandleID="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.123 [INFO][4356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" HandleID="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"csi-node-driver-ghtnv", "timestamp":"2025-01-30 06:17:55.060702707 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.123 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.123 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.123 [INFO][4356] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.127 [INFO][4356] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.139 [INFO][4356] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.153 [INFO][4356] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.159 [INFO][4356] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.164 [INFO][4356] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.165 [INFO][4356] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.168 [INFO][4356] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75 Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.179 [INFO][4356] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.187 [INFO][4356] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.196/26] block=192.168.94.192/26 handle="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.187 [INFO][4356] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.196/26] handle="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.187 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:17:55.227203 containerd[1500]: 2025-01-30 06:17:55.188 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.196/26] IPv6=[] ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" HandleID="k8s-pod-network.c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.191 [INFO][4337] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60be3982-84c5-43fa-a1af-7b15bfd904a3", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"csi-node-driver-ghtnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali244ead7c958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.191 [INFO][4337] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.196/32] ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.191 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali244ead7c958 ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.203 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.205 [INFO][4337] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60be3982-84c5-43fa-a1af-7b15bfd904a3", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75", Pod:"csi-node-driver-ghtnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali244ead7c958", MAC:"ca:d2:d1:41:fa:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:55.228162 containerd[1500]: 2025-01-30 06:17:55.218 [INFO][4337] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75" Namespace="calico-system" Pod="csi-node-driver-ghtnv" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:17:55.270579 containerd[1500]: time="2025-01-30T06:17:55.270540918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d85c8757-nkrq9,Uid:cf422376-3d63-4f20-9072-0d2f8b49abb2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e\"" Jan 30 06:17:55.281403 containerd[1500]: time="2025-01-30T06:17:55.281213856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:55.281403 containerd[1500]: time="2025-01-30T06:17:55.281270381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:55.281403 containerd[1500]: time="2025-01-30T06:17:55.281279829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:55.284147 containerd[1500]: time="2025-01-30T06:17:55.281366582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:55.312328 systemd[1]: Started cri-containerd-c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75.scope - libcontainer container c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75. 
Jan 30 06:17:55.353678 containerd[1500]: time="2025-01-30T06:17:55.353592456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghtnv,Uid:60be3982-84c5-43fa-a1af-7b15bfd904a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75\"" Jan 30 06:17:55.445747 containerd[1500]: time="2025-01-30T06:17:55.445680804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:55.446415 containerd[1500]: time="2025-01-30T06:17:55.446377961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 06:17:55.447722 containerd[1500]: time="2025-01-30T06:17:55.447168673Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:55.450089 containerd[1500]: time="2025-01-30T06:17:55.450026400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:55.450596 containerd[1500]: time="2025-01-30T06:17:55.450559239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.149661178s" Jan 30 06:17:55.450646 containerd[1500]: time="2025-01-30T06:17:55.450600627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 06:17:55.453150 containerd[1500]: time="2025-01-30T06:17:55.452203881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 06:17:55.453345 containerd[1500]: time="2025-01-30T06:17:55.453308121Z" level=info msg="CreateContainer within sandbox \"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 06:17:55.469757 containerd[1500]: time="2025-01-30T06:17:55.469720889Z" level=info msg="CreateContainer within sandbox \"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b3ca98f0989d341bca70f5ea3f1712d0146d0165b9d1d817f187c4ec3c92591\"" Jan 30 06:17:55.470270 containerd[1500]: time="2025-01-30T06:17:55.470239762Z" level=info msg="StartContainer for \"7b3ca98f0989d341bca70f5ea3f1712d0146d0165b9d1d817f187c4ec3c92591\"" Jan 30 06:17:55.497255 systemd[1]: Started cri-containerd-7b3ca98f0989d341bca70f5ea3f1712d0146d0165b9d1d817f187c4ec3c92591.scope - libcontainer container 7b3ca98f0989d341bca70f5ea3f1712d0146d0165b9d1d817f187c4ec3c92591. 
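Note: the PullImage entry above reports 43494504 bytes for the apiserver image fetched in 3.149661178s. A rough Go estimate of the effective pull rate, assuming (the log does not say so) that the whole reported size crossed the network in that window:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the PullImage log entry above.
	const sizeBytes = 43494504
	pull, err := time.ParseDuration("3.149661178s")
	if err != nil {
		panic(err)
	}

	// Treat this as an upper bound on a cold pull: layers already present
	// in the content store are not re-fetched.
	mibps := float64(sizeBytes) / pull.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s effective pull rate\n", mibps)
}

The second pull of the same image a little further down completes in roughly 370ms with only 77 bytes read, i.e. it is satisfied from the local content store rather than the registry.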
Jan 30 06:17:55.541067 containerd[1500]: time="2025-01-30T06:17:55.539860473Z" level=info msg="StartContainer for \"7b3ca98f0989d341bca70f5ea3f1712d0146d0165b9d1d817f187c4ec3c92591\" returns successfully" Jan 30 06:17:55.737393 containerd[1500]: time="2025-01-30T06:17:55.737260186Z" level=info msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" Jan 30 06:17:55.738419 containerd[1500]: time="2025-01-30T06:17:55.738384804Z" level=info msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" Jan 30 06:17:55.822137 containerd[1500]: time="2025-01-30T06:17:55.820419737Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:55.822137 containerd[1500]: time="2025-01-30T06:17:55.821497627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 06:17:55.822762 containerd[1500]: time="2025-01-30T06:17:55.822724576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 370.487783ms" Jan 30 06:17:55.822762 containerd[1500]: time="2025-01-30T06:17:55.822757738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 06:17:55.824770 containerd[1500]: time="2025-01-30T06:17:55.824553215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 06:17:55.826501 containerd[1500]: time="2025-01-30T06:17:55.826479696Z" level=info msg="CreateContainer within sandbox \"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 06:17:55.901160 containerd[1500]: time="2025-01-30T06:17:55.901119695Z" level=info msg="CreateContainer within sandbox \"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b60defaefa57457dba7753eb86eb974f881bf1b5854ccbbd2fee579af2d3a7ef\"" Jan 30 06:17:55.903059 containerd[1500]: time="2025-01-30T06:17:55.903036959Z" level=info msg="StartContainer for \"b60defaefa57457dba7753eb86eb974f881bf1b5854ccbbd2fee579af2d3a7ef\"" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.833 [INFO][4538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.833 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" iface="eth0" netns="/var/run/netns/cni-1deab8f5-b8e1-82d8-01d1-a7ded5489467" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.834 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" iface="eth0" netns="/var/run/netns/cni-1deab8f5-b8e1-82d8-01d1-a7ded5489467" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.834 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" iface="eth0" netns="/var/run/netns/cni-1deab8f5-b8e1-82d8-01d1-a7ded5489467" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.835 [INFO][4538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.835 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.903 [INFO][4550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.904 [INFO][4550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.904 [INFO][4550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.915 [WARNING][4550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.915 [INFO][4550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.921 [INFO][4550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:55.933564 containerd[1500]: 2025-01-30 06:17:55.926 [INFO][4538] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:17:55.935414 containerd[1500]: time="2025-01-30T06:17:55.933794848Z" level=info msg="TearDown network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" successfully" Jan 30 06:17:55.935414 containerd[1500]: time="2025-01-30T06:17:55.933821117Z" level=info msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" returns successfully" Jan 30 06:17:55.935414 containerd[1500]: time="2025-01-30T06:17:55.934684125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b57545dfb-9fz6l,Uid:41640a56-2c47-4ec4-99af-6f90c868637c,Namespace:calico-system,Attempt:1,}" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.838 [INFO][4537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.839 [INFO][4537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" iface="eth0" netns="/var/run/netns/cni-109fa017-6068-be83-4145-53130fe2282b" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.839 [INFO][4537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" iface="eth0" netns="/var/run/netns/cni-109fa017-6068-be83-4145-53130fe2282b" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.840 [INFO][4537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" iface="eth0" netns="/var/run/netns/cni-109fa017-6068-be83-4145-53130fe2282b" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.840 [INFO][4537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.840 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.917 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.917 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.921 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.929 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.929 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.931 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:17:55.943954 containerd[1500]: 2025-01-30 06:17:55.935 [INFO][4537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:17:55.946403 containerd[1500]: time="2025-01-30T06:17:55.946369581Z" level=info msg="TearDown network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" successfully" Jan 30 06:17:55.946514 containerd[1500]: time="2025-01-30T06:17:55.946488414Z" level=info msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" returns successfully" Jan 30 06:17:55.947248 containerd[1500]: time="2025-01-30T06:17:55.947225115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvvt2,Uid:e4a0874f-e29f-4582-aee0-6703edd12dfa,Namespace:kube-system,Attempt:1,}" Jan 30 06:17:55.980419 systemd[1]: Started cri-containerd-b60defaefa57457dba7753eb86eb974f881bf1b5854ccbbd2fee579af2d3a7ef.scope - libcontainer container b60defaefa57457dba7753eb86eb974f881bf1b5854ccbbd2fee579af2d3a7ef. Jan 30 06:17:56.003086 systemd[1]: Started sshd@10-78.47.103.36:22-125.74.237.67:40922.service - OpenSSH per-connection server daemon (125.74.237.67:40922). Jan 30 06:17:56.042081 systemd[1]: run-netns-cni\x2d1deab8f5\x2db8e1\x2d82d8\x2d01d1\x2da7ded5489467.mount: Deactivated successfully. Jan 30 06:17:56.042379 systemd[1]: run-netns-cni\x2d109fa017\x2d6068\x2dbe83\x2d4145\x2d53130fe2282b.mount: Deactivated successfully. 
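Note: the run-netns-cni\x2d... .mount units deactivated above are the CNI network-namespace bind mounts under /run/netns with systemd's unit-name escaping applied: '/' becomes '-' and a literal '-' becomes \x2d. A short Go sketch that reverses the escaping seen here (simplified; the real systemd-escape handles more characters):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath undoes the systemd path escaping visible in the log:
// bare '-' separates path components, while "\x2d" (and other \xNN escapes)
// encode literal bytes. Simplified sketch, not a full systemd-escape.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d1deab8f5\x2db8e1\x2d82d8\x2d01d1\x2da7ded5489467.mount`
	// Prints /run/netns/cni-1deab8f5-b8e1-82d8-01d1-a7ded5489467,
	// the netns named in the teardown entries above.
	fmt.Println(unescapeUnitPath(unit))
}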
Jan 30 06:17:56.175522 containerd[1500]: time="2025-01-30T06:17:56.175451379Z" level=info msg="StartContainer for \"b60defaefa57457dba7753eb86eb974f881bf1b5854ccbbd2fee579af2d3a7ef\" returns successfully" Jan 30 06:17:56.264386 systemd-networkd[1401]: cali20eec47e75b: Link UP Jan 30 06:17:56.264585 systemd-networkd[1401]: cali20eec47e75b: Gained carrier Jan 30 06:17:56.278024 kubelet[2655]: I0130 06:17:56.277953 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d85c8757-jvjkv" podStartSLOduration=23.126408934 podStartE2EDuration="26.277923275s" podCreationTimestamp="2025-01-30 06:17:30 +0000 UTC" firstStartedPulling="2025-01-30 06:17:52.299907495 +0000 UTC m=+34.675478231" lastFinishedPulling="2025-01-30 06:17:55.451421835 +0000 UTC m=+37.826992572" observedRunningTime="2025-01-30 06:17:55.96770221 +0000 UTC m=+38.343272977" watchObservedRunningTime="2025-01-30 06:17:56.277923275 +0000 UTC m=+38.653494011" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.085 [INFO][4583] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0 calico-kube-controllers-6b57545dfb- calico-system 41640a56-2c47-4ec4-99af-6f90c868637c 808 0 2025-01-30 06:17:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b57545dfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 calico-kube-controllers-6b57545dfb-9fz6l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali20eec47e75b [] []}} ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.086 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.125 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.144 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"calico-kube-controllers-6b57545dfb-9fz6l", "timestamp":"2025-01-30 06:17:56.125368293 +0000 UTC"}, 
Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.144 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.149 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.149 [INFO][4618] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.157 [INFO][4618] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.235 [INFO][4618] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.243 [INFO][4618] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.244 [INFO][4618] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.247 [INFO][4618] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.247 [INFO][4618] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.248 [INFO][4618] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3 Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.252 [INFO][4618] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4618] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.197/26] block=192.168.94.192/26 handle="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4618] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.197/26] handle="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:17:56.283728 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.197/26] IPv6=[] ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.259 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0", GenerateName:"calico-kube-controllers-6b57545dfb-", Namespace:"calico-system", SelfLink:"", UID:"41640a56-2c47-4ec4-99af-6f90c868637c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b57545dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"calico-kube-controllers-6b57545dfb-9fz6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20eec47e75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.259 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.197/32] ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.259 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20eec47e75b ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.262 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 
06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.262 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0", GenerateName:"calico-kube-controllers-6b57545dfb-", Namespace:"calico-system", SelfLink:"", UID:"41640a56-2c47-4ec4-99af-6f90c868637c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b57545dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3", Pod:"calico-kube-controllers-6b57545dfb-9fz6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20eec47e75b", MAC:"de:e2:19:8b:f4:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:56.284622 containerd[1500]: 2025-01-30 06:17:56.276 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Namespace="calico-system" Pod="calico-kube-controllers-6b57545dfb-9fz6l" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:17:56.326230 containerd[1500]: time="2025-01-30T06:17:56.325912867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:56.326230 containerd[1500]: time="2025-01-30T06:17:56.325971186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:56.326230 containerd[1500]: time="2025-01-30T06:17:56.325984170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:56.326230 containerd[1500]: time="2025-01-30T06:17:56.326079740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:56.365272 systemd-networkd[1401]: cali812f35c6775: Gained IPv6LL Jan 30 06:17:56.375262 systemd[1]: Started cri-containerd-f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3.scope - libcontainer container f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3. 
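Note: the pod_startup_latency_tracker entry for calico-apiserver-85d85c8757-jvjkv above reports podStartE2EDuration="26.277923275s" and podStartSLOduration=23.126408934. The gap between the two is the image-pull window (lastFinishedPulling minus firstStartedPulling); a short Go check using only the timestamps quoted in that entry reproduces both figures to within a nanosecond of rounding:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's Parse accepts a fractional second even when the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps quoted verbatim from the pod_startup_latency_tracker entry above.
	created := parse("2025-01-30 06:17:30 +0000 UTC")
	firstPull := parse("2025-01-30 06:17:52.299907495 +0000 UTC")
	lastPull := parse("2025-01-30 06:17:55.451421835 +0000 UTC")
	running := parse("2025-01-30 06:17:56.277923275 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E with the image-pull window excluded

	fmt.Printf("E2E: %v, SLO (pull excluded): %v\n", e2e, slo)
}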
Jan 30 06:17:56.404726 systemd-networkd[1401]: cali3dd2d35c3c3: Link UP Jan 30 06:17:56.405062 systemd-networkd[1401]: cali3dd2d35c3c3: Gained carrier Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.107 [INFO][4594] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0 coredns-668d6bf9bc- kube-system e4a0874f-e29f-4582-aee0-6703edd12dfa 809 0 2025-01-30 06:17:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 coredns-668d6bf9bc-pvvt2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3dd2d35c3c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.107 [INFO][4594] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.175 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" HandleID="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.243 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" HandleID="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"coredns-668d6bf9bc-pvvt2", "timestamp":"2025-01-30 06:17:56.175744148 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.243 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.257 [INFO][4623] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.266 [INFO][4623] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.338 [INFO][4623] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.349 [INFO][4623] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.354 [INFO][4623] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.361 [INFO][4623] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.361 [INFO][4623] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.364 [INFO][4623] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0 Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.383 [INFO][4623] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.393 [INFO][4623] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.198/26] block=192.168.94.192/26 handle="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.393 [INFO][4623] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.198/26] handle="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.394 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:17:56.423493 containerd[1500]: 2025-01-30 06:17:56.394 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.198/26] IPv6=[] ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" HandleID="k8s-pod-network.d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.398 [INFO][4594] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e4a0874f-e29f-4582-aee0-6703edd12dfa", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"coredns-668d6bf9bc-pvvt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dd2d35c3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.399 [INFO][4594] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.198/32] ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.399 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dd2d35c3c3 ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.404 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" 
WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.404 [INFO][4594] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e4a0874f-e29f-4582-aee0-6703edd12dfa", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0", Pod:"coredns-668d6bf9bc-pvvt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dd2d35c3c3", MAC:"1e:07:ca:80:20:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:17:56.424148 containerd[1500]: 2025-01-30 06:17:56.417 [INFO][4594] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvvt2" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:17:56.467747 containerd[1500]: time="2025-01-30T06:17:56.467330380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:17:56.467747 containerd[1500]: time="2025-01-30T06:17:56.467394490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:17:56.467747 containerd[1500]: time="2025-01-30T06:17:56.467407765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:56.467747 containerd[1500]: time="2025-01-30T06:17:56.467493817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:17:56.505458 systemd[1]: Started cri-containerd-d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0.scope - libcontainer container d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0. Jan 30 06:17:56.572252 containerd[1500]: time="2025-01-30T06:17:56.572208517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvvt2,Uid:e4a0874f-e29f-4582-aee0-6703edd12dfa,Namespace:kube-system,Attempt:1,} returns sandbox id \"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0\"" Jan 30 06:17:56.581008 containerd[1500]: time="2025-01-30T06:17:56.578299695Z" level=info msg="CreateContainer within sandbox \"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 06:17:56.588688 containerd[1500]: time="2025-01-30T06:17:56.588634549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b57545dfb-9fz6l,Uid:41640a56-2c47-4ec4-99af-6f90c868637c,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\"" Jan 30 06:17:56.600441 containerd[1500]: time="2025-01-30T06:17:56.600396367Z" level=info msg="CreateContainer within sandbox \"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7e1eedb5b0a5752ece26a771255aa908af07914c02114f1d565c84c0f26c370\"" Jan 30 06:17:56.601611 containerd[1500]: time="2025-01-30T06:17:56.601574416Z" level=info msg="StartContainer for \"f7e1eedb5b0a5752ece26a771255aa908af07914c02114f1d565c84c0f26c370\"" Jan 30 06:17:56.649965 systemd[1]: Started cri-containerd-f7e1eedb5b0a5752ece26a771255aa908af07914c02114f1d565c84c0f26c370.scope - libcontainer container f7e1eedb5b0a5752ece26a771255aa908af07914c02114f1d565c84c0f26c370. 
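Note: the coredns WorkloadEndpoint dump above prints its ports as Go hex values (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). Decoded, these are the usual CoreDNS ports, 53 and 9153:

package main

import "fmt"

func main() {
	// Hex port values copied from the v3.WorkloadEndpointPort dump above.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns/dns-tcp -> 53, metrics -> 9153
	}
}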
Jan 30 06:17:56.704972 containerd[1500]: time="2025-01-30T06:17:56.704909378Z" level=info msg="StartContainer for \"f7e1eedb5b0a5752ece26a771255aa908af07914c02114f1d565c84c0f26c370\" returns successfully" Jan 30 06:17:56.750038 systemd-networkd[1401]: cali244ead7c958: Gained IPv6LL Jan 30 06:17:56.993079 kubelet[2655]: I0130 06:17:56.992268 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d85c8757-nkrq9" podStartSLOduration=26.443920788 podStartE2EDuration="26.992252416s" podCreationTimestamp="2025-01-30 06:17:30 +0000 UTC" firstStartedPulling="2025-01-30 06:17:55.275366514 +0000 UTC m=+37.650937250" lastFinishedPulling="2025-01-30 06:17:55.823698142 +0000 UTC m=+38.199268878" observedRunningTime="2025-01-30 06:17:56.979192303 +0000 UTC m=+39.354763040" watchObservedRunningTime="2025-01-30 06:17:56.992252416 +0000 UTC m=+39.367823162" Jan 30 06:17:56.994131 kubelet[2655]: I0130 06:17:56.994064 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pvvt2" podStartSLOduration=33.99405203 podStartE2EDuration="33.99405203s" podCreationTimestamp="2025-01-30 06:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:17:56.993200013 +0000 UTC m=+39.368770749" watchObservedRunningTime="2025-01-30 06:17:56.99405203 +0000 UTC m=+39.369622776" Jan 30 06:17:57.473790 containerd[1500]: time="2025-01-30T06:17:57.473722047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:57.474781 containerd[1500]: time="2025-01-30T06:17:57.474739344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 06:17:57.475646 containerd[1500]: time="2025-01-30T06:17:57.475608553Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:57.477557 containerd[1500]: time="2025-01-30T06:17:57.477519675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:17:57.478015 containerd[1500]: time="2025-01-30T06:17:57.477984717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.653398651s" Jan 30 06:17:57.478057 containerd[1500]: time="2025-01-30T06:17:57.478015856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 06:17:57.479313 containerd[1500]: time="2025-01-30T06:17:57.479294953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 06:17:57.480537 containerd[1500]: time="2025-01-30T06:17:57.480509801Z" level=info msg="CreateContainer within sandbox \"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 06:17:57.501201 containerd[1500]: 
time="2025-01-30T06:17:57.501142269Z" level=info msg="CreateContainer within sandbox \"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3170744d9f4d11b93c680a814a817e0863dbc8a7405da39964683f1e26350af6\"" Jan 30 06:17:57.503008 containerd[1500]: time="2025-01-30T06:17:57.502973642Z" level=info msg="StartContainer for \"3170744d9f4d11b93c680a814a817e0863dbc8a7405da39964683f1e26350af6\"" Jan 30 06:17:57.542263 systemd[1]: Started cri-containerd-3170744d9f4d11b93c680a814a817e0863dbc8a7405da39964683f1e26350af6.scope - libcontainer container 3170744d9f4d11b93c680a814a817e0863dbc8a7405da39964683f1e26350af6. Jan 30 06:17:57.572351 containerd[1500]: time="2025-01-30T06:17:57.572194095Z" level=info msg="StartContainer for \"3170744d9f4d11b93c680a814a817e0863dbc8a7405da39964683f1e26350af6\" returns successfully" Jan 30 06:17:57.709693 systemd-networkd[1401]: cali3dd2d35c3c3: Gained IPv6LL Jan 30 06:17:57.987607 kubelet[2655]: I0130 06:17:57.987333 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:17:58.093421 systemd-networkd[1401]: cali20eec47e75b: Gained IPv6LL Jan 30 06:18:00.216656 containerd[1500]: time="2025-01-30T06:18:00.216534677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:00.218071 containerd[1500]: time="2025-01-30T06:18:00.217967694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 06:18:00.219448 containerd[1500]: time="2025-01-30T06:18:00.219393537Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:00.222356 containerd[1500]: time="2025-01-30T06:18:00.222297731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:00.223634 containerd[1500]: time="2025-01-30T06:18:00.223227714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.743851187s" Jan 30 06:18:00.223634 containerd[1500]: time="2025-01-30T06:18:00.223269893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 06:18:00.224319 containerd[1500]: time="2025-01-30T06:18:00.224283844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 06:18:00.243181 containerd[1500]: time="2025-01-30T06:18:00.242807069Z" level=info msg="CreateContainer within sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 06:18:00.271967 containerd[1500]: time="2025-01-30T06:18:00.271920258Z" level=info msg="CreateContainer within sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\"" Jan 30 06:18:00.273610 containerd[1500]: time="2025-01-30T06:18:00.273582263Z" level=info msg="StartContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\"" Jan 30 06:18:00.299630 systemd[1]: Started sshd@11-78.47.103.36:22-183.110.116.126:35554.service - OpenSSH per-connection server daemon (183.110.116.126:35554). Jan 30 06:18:00.346303 systemd[1]: Started cri-containerd-6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4.scope - libcontainer container 6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4. Jan 30 06:18:00.414347 containerd[1500]: time="2025-01-30T06:18:00.414299085Z" level=info msg="StartContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" returns successfully" Jan 30 06:18:01.007052 kubelet[2655]: I0130 06:18:01.006986 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b57545dfb-9fz6l" podStartSLOduration=28.374060234 podStartE2EDuration="32.006971206s" podCreationTimestamp="2025-01-30 06:17:29 +0000 UTC" firstStartedPulling="2025-01-30 06:17:56.591224965 +0000 UTC m=+38.966795701" lastFinishedPulling="2025-01-30 06:18:00.224135937 +0000 UTC m=+42.599706673" observedRunningTime="2025-01-30 06:18:01.004843207 +0000 UTC m=+43.380413953" watchObservedRunningTime="2025-01-30 06:18:01.006971206 +0000 UTC m=+43.382541952" Jan 30 06:18:01.996008 kubelet[2655]: I0130 06:18:01.995946 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:18:02.352018 containerd[1500]: time="2025-01-30T06:18:02.351959143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:02.353049 containerd[1500]: time="2025-01-30T06:18:02.352984976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 06:18:02.354012 containerd[1500]: time="2025-01-30T06:18:02.353967318Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:02.355853 containerd[1500]: time="2025-01-30T06:18:02.355799884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 06:18:02.356811 containerd[1500]: time="2025-01-30T06:18:02.356323675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.132008272s" Jan 30 06:18:02.356811 containerd[1500]: time="2025-01-30T06:18:02.356368870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 06:18:02.359589 containerd[1500]: time="2025-01-30T06:18:02.359517333Z" level=info msg="CreateContainer 
within sandbox \"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 06:18:02.380331 containerd[1500]: time="2025-01-30T06:18:02.380095071Z" level=info msg="CreateContainer within sandbox \"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2f369e669c8cb7fbf477525673dcc2a1510b6ac132b07a9d10cc8ed525ecd09d\"" Jan 30 06:18:02.382041 containerd[1500]: time="2025-01-30T06:18:02.381649685Z" level=info msg="StartContainer for \"2f369e669c8cb7fbf477525673dcc2a1510b6ac132b07a9d10cc8ed525ecd09d\"" Jan 30 06:18:02.432359 systemd[1]: Started cri-containerd-2f369e669c8cb7fbf477525673dcc2a1510b6ac132b07a9d10cc8ed525ecd09d.scope - libcontainer container 2f369e669c8cb7fbf477525673dcc2a1510b6ac132b07a9d10cc8ed525ecd09d. Jan 30 06:18:02.434955 sshd[4855]: Invalid user ubuntu from 183.110.116.126 port 35554 Jan 30 06:18:02.472713 containerd[1500]: time="2025-01-30T06:18:02.472616113Z" level=info msg="StartContainer for \"2f369e669c8cb7fbf477525673dcc2a1510b6ac132b07a9d10cc8ed525ecd09d\" returns successfully" Jan 30 06:18:02.782444 sshd[4855]: Received disconnect from 183.110.116.126 port 35554:11: Bye Bye [preauth] Jan 30 06:18:02.782444 sshd[4855]: Disconnected from invalid user ubuntu 183.110.116.126 port 35554 [preauth] Jan 30 06:18:02.785709 systemd[1]: sshd@11-78.47.103.36:22-183.110.116.126:35554.service: Deactivated successfully. Jan 30 06:18:02.913865 kubelet[2655]: I0130 06:18:02.913823 2655 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 06:18:02.920884 kubelet[2655]: I0130 06:18:02.920855 2655 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 06:18:06.484851 kubelet[2655]: I0130 06:18:06.484693 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:18:06.570845 kubelet[2655]: I0130 06:18:06.570039 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghtnv" podStartSLOduration=30.567656143 podStartE2EDuration="37.570023812s" podCreationTimestamp="2025-01-30 06:17:29 +0000 UTC" firstStartedPulling="2025-01-30 06:17:55.355086537 +0000 UTC m=+37.730657272" lastFinishedPulling="2025-01-30 06:18:02.357454205 +0000 UTC m=+44.733024941" observedRunningTime="2025-01-30 06:18:03.011338445 +0000 UTC m=+45.386909181" watchObservedRunningTime="2025-01-30 06:18:06.570023812 +0000 UTC m=+48.945594547" Jan 30 06:18:17.738301 containerd[1500]: time="2025-01-30T06:18:17.738246122Z" level=info msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.821 [WARNING][5000] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f41a5a96-3618-4093-895b-a47df0dad582", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a", Pod:"calico-apiserver-85d85c8757-jvjkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif27673ac49d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.822 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.822 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" iface="eth0" netns="" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.822 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.822 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.843 [INFO][5006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.843 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.843 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.848 [WARNING][5006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.848 [INFO][5006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.850 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:17.855556 containerd[1500]: 2025-01-30 06:18:17.853 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.856000 containerd[1500]: time="2025-01-30T06:18:17.855586249Z" level=info msg="TearDown network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" successfully" Jan 30 06:18:17.856000 containerd[1500]: time="2025-01-30T06:18:17.855614933Z" level=info msg="StopPodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" returns successfully" Jan 30 06:18:17.856422 containerd[1500]: time="2025-01-30T06:18:17.856400135Z" level=info msg="RemovePodSandbox for \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" Jan 30 06:18:17.858908 containerd[1500]: time="2025-01-30T06:18:17.858876979Z" level=info msg="Forcibly stopping sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\"" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.893 [WARNING][5024] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f41a5a96-3618-4093-895b-a47df0dad582", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"3d5b4ec51bc3ca4db156b3215fb2d56171b6d307b5a184a61d930386e066061a", Pod:"calico-apiserver-85d85c8757-jvjkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif27673ac49d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.893 [INFO][5024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.893 [INFO][5024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" iface="eth0" netns="" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.893 [INFO][5024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.893 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.910 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.910 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.910 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.915 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.915 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" HandleID="k8s-pod-network.b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--jvjkv-eth0" Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.917 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:17.921537 containerd[1500]: 2025-01-30 06:18:17.919 [INFO][5024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92" Jan 30 06:18:17.922867 containerd[1500]: time="2025-01-30T06:18:17.921561470Z" level=info msg="TearDown network for sandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" successfully" Jan 30 06:18:17.932029 containerd[1500]: time="2025-01-30T06:18:17.931990494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:17.932093 containerd[1500]: time="2025-01-30T06:18:17.932048633Z" level=info msg="RemovePodSandbox \"b9a8699d0e93f23baf7ea63e396cb82e8c2ff86b4872fc9fc386403a0fac6b92\" returns successfully" Jan 30 06:18:17.932523 containerd[1500]: time="2025-01-30T06:18:17.932488948Z" level=info msg="StopPodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:17.984 [WARNING][5055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60be3982-84c5-43fa-a1af-7b15bfd904a3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75", Pod:"csi-node-driver-ghtnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali244ead7c958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:17.984 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:17.984 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" iface="eth0" netns="" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:17.984 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:17.984 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.005 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.005 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.006 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.011 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.011 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.012 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.017910 containerd[1500]: 2025-01-30 06:18:18.014 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.017910 containerd[1500]: time="2025-01-30T06:18:18.016553104Z" level=info msg="TearDown network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" successfully" Jan 30 06:18:18.017910 containerd[1500]: time="2025-01-30T06:18:18.016577670Z" level=info msg="StopPodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" returns successfully" Jan 30 06:18:18.017910 containerd[1500]: time="2025-01-30T06:18:18.017423857Z" level=info msg="RemovePodSandbox for \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" Jan 30 06:18:18.017910 containerd[1500]: time="2025-01-30T06:18:18.017455527Z" level=info msg="Forcibly stopping sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\"" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.058 [WARNING][5079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60be3982-84c5-43fa-a1af-7b15bfd904a3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"c1b2aec0d1772aa52ead700a9eec6acd28500e9276a04fd574319c6c7034ea75", Pod:"csi-node-driver-ghtnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali244ead7c958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.058 [INFO][5079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.058 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" iface="eth0" netns="" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.058 [INFO][5079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.058 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.085 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.085 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.085 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.091 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.091 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" HandleID="k8s-pod-network.236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-csi--node--driver--ghtnv-eth0" Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.093 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.098136 containerd[1500]: 2025-01-30 06:18:18.095 [INFO][5079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2" Jan 30 06:18:18.099816 containerd[1500]: time="2025-01-30T06:18:18.098103990Z" level=info msg="TearDown network for sandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" successfully" Jan 30 06:18:18.103519 containerd[1500]: time="2025-01-30T06:18:18.103377668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:18.103519 containerd[1500]: time="2025-01-30T06:18:18.103438812Z" level=info msg="RemovePodSandbox \"236a935ed9d676a907a92a7612c5d7ab68595db53eb62c55a49e529307f913a2\" returns successfully" Jan 30 06:18:18.103897 containerd[1500]: time="2025-01-30T06:18:18.103871413Z" level=info msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.133 [WARNING][5104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e4a0874f-e29f-4582-aee0-6703edd12dfa", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0", Pod:"coredns-668d6bf9bc-pvvt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dd2d35c3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.133 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.133 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" iface="eth0" netns="" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.133 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.133 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.149 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.150 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.150 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.155 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.155 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.156 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.160676 containerd[1500]: 2025-01-30 06:18:18.158 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.162243 containerd[1500]: time="2025-01-30T06:18:18.161077423Z" level=info msg="TearDown network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" successfully" Jan 30 06:18:18.162243 containerd[1500]: time="2025-01-30T06:18:18.161103852Z" level=info msg="StopPodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" returns successfully" Jan 30 06:18:18.162243 containerd[1500]: time="2025-01-30T06:18:18.161557623Z" level=info msg="RemovePodSandbox for \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" Jan 30 06:18:18.162243 containerd[1500]: time="2025-01-30T06:18:18.161581248Z" level=info msg="Forcibly stopping sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\"" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.201 [WARNING][5128] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e4a0874f-e29f-4582-aee0-6703edd12dfa", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"d6e870a79f708d94ef2bb386409a881e81194bb58d6f832e58fe3f4baca684f0", Pod:"coredns-668d6bf9bc-pvvt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dd2d35c3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.201 [INFO][5128] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.201 [INFO][5128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" iface="eth0" netns="" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.201 [INFO][5128] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.201 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.218 [INFO][5134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.218 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.219 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.223 [WARNING][5134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.223 [INFO][5134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" HandleID="k8s-pod-network.8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--pvvt2-eth0" Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.225 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.229273 containerd[1500]: 2025-01-30 06:18:18.227 [INFO][5128] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36" Jan 30 06:18:18.230409 containerd[1500]: time="2025-01-30T06:18:18.229285841Z" level=info msg="TearDown network for sandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" successfully" Jan 30 06:18:18.232513 containerd[1500]: time="2025-01-30T06:18:18.232482394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:18.232585 containerd[1500]: time="2025-01-30T06:18:18.232535163Z" level=info msg="RemovePodSandbox \"8df49f7f2379a59f863e9b487ac100440e4d0681bcbad3fc87b657d2002b9c36\" returns successfully" Jan 30 06:18:18.233052 containerd[1500]: time="2025-01-30T06:18:18.233025914Z" level=info msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.261 [WARNING][5153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0", GenerateName:"calico-kube-controllers-6b57545dfb-", Namespace:"calico-system", SelfLink:"", UID:"41640a56-2c47-4ec4-99af-6f90c868637c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b57545dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3", Pod:"calico-kube-controllers-6b57545dfb-9fz6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20eec47e75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.261 [INFO][5153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.261 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" iface="eth0" netns="" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.261 [INFO][5153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.262 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.279 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.279 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.279 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.284 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.284 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.285 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.289222 containerd[1500]: 2025-01-30 06:18:18.287 [INFO][5153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.289222 containerd[1500]: time="2025-01-30T06:18:18.289172156Z" level=info msg="TearDown network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" successfully" Jan 30 06:18:18.289222 containerd[1500]: time="2025-01-30T06:18:18.289195890Z" level=info msg="StopPodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" returns successfully" Jan 30 06:18:18.289701 containerd[1500]: time="2025-01-30T06:18:18.289640163Z" level=info msg="RemovePodSandbox for \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" Jan 30 06:18:18.289701 containerd[1500]: time="2025-01-30T06:18:18.289662525Z" level=info msg="Forcibly stopping sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\"" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.318 [WARNING][5177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0", GenerateName:"calico-kube-controllers-6b57545dfb-", Namespace:"calico-system", SelfLink:"", UID:"41640a56-2c47-4ec4-99af-6f90c868637c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b57545dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3", Pod:"calico-kube-controllers-6b57545dfb-9fz6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20eec47e75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.319 [INFO][5177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.319 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" iface="eth0" netns="" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.319 [INFO][5177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.319 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.336 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.336 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.336 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.341 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.341 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" HandleID="k8s-pod-network.5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.343 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.346850 containerd[1500]: 2025-01-30 06:18:18.344 [INFO][5177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2" Jan 30 06:18:18.347788 containerd[1500]: time="2025-01-30T06:18:18.346852885Z" level=info msg="TearDown network for sandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" successfully" Jan 30 06:18:18.353981 containerd[1500]: time="2025-01-30T06:18:18.353932099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:18.364893 containerd[1500]: time="2025-01-30T06:18:18.364864166Z" level=info msg="RemovePodSandbox \"5c3dea86dd29facbde84829d911bc599a58cc01d35159f73ccfe6120394d6cd2\" returns successfully" Jan 30 06:18:18.365358 containerd[1500]: time="2025-01-30T06:18:18.365333655Z" level=info msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.396 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf422376-3d63-4f20-9072-0d2f8b49abb2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e", Pod:"calico-apiserver-85d85c8757-nkrq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali812f35c6775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.396 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.396 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" iface="eth0" netns="" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.397 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.397 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.418 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.418 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.418 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.423 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.423 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.424 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.427850 containerd[1500]: 2025-01-30 06:18:18.425 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.428861 containerd[1500]: time="2025-01-30T06:18:18.427886099Z" level=info msg="TearDown network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" successfully" Jan 30 06:18:18.428861 containerd[1500]: time="2025-01-30T06:18:18.427909163Z" level=info msg="StopPodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" returns successfully" Jan 30 06:18:18.428861 containerd[1500]: time="2025-01-30T06:18:18.428305907Z" level=info msg="RemovePodSandbox for \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" Jan 30 06:18:18.428861 containerd[1500]: time="2025-01-30T06:18:18.428332406Z" level=info msg="Forcibly stopping sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\"" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.459 [WARNING][5227] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0", GenerateName:"calico-apiserver-85d85c8757-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf422376-3d63-4f20-9072-0d2f8b49abb2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d85c8757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"100fdddd612ee1419d81fa162decbc8277ca49151a1a551496f279105aad4e3e", Pod:"calico-apiserver-85d85c8757-nkrq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali812f35c6775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.459 [INFO][5227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.459 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" iface="eth0" netns="" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.459 [INFO][5227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.459 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.477 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.477 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.477 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.481 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.481 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" HandleID="k8s-pod-network.4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--apiserver--85d85c8757--nkrq9-eth0" Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.482 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.486782 containerd[1500]: 2025-01-30 06:18:18.485 [INFO][5227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8" Jan 30 06:18:18.487207 containerd[1500]: time="2025-01-30T06:18:18.486816271Z" level=info msg="TearDown network for sandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" successfully" Jan 30 06:18:18.490077 containerd[1500]: time="2025-01-30T06:18:18.490044695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:18.490183 containerd[1500]: time="2025-01-30T06:18:18.490093516Z" level=info msg="RemovePodSandbox \"4a2217dcfab107905535e49830275d2f77cdd601ceda363ff7012afdf6541ed8\" returns successfully" Jan 30 06:18:18.490683 containerd[1500]: time="2025-01-30T06:18:18.490644339Z" level=info msg="StopPodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.520 [WARNING][5251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e994494-74b7-4b5f-9da4-52083e102b26", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10", Pod:"coredns-668d6bf9bc-twvvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f3e4ee1d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.520 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.520 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" iface="eth0" netns="" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.520 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.520 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.537 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.537 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.537 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.542 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.542 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.544 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.547765 containerd[1500]: 2025-01-30 06:18:18.545 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.547765 containerd[1500]: time="2025-01-30T06:18:18.547632730Z" level=info msg="TearDown network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" successfully" Jan 30 06:18:18.547765 containerd[1500]: time="2025-01-30T06:18:18.547656134Z" level=info msg="StopPodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" returns successfully" Jan 30 06:18:18.549190 containerd[1500]: time="2025-01-30T06:18:18.548812422Z" level=info msg="RemovePodSandbox for \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" Jan 30 06:18:18.549190 containerd[1500]: time="2025-01-30T06:18:18.548905617Z" level=info msg="Forcibly stopping sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\"" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.578 [WARNING][5276] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e994494-74b7-4b5f-9da4-52083e102b26", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"430ea1b96f7ac9a3f136115554a18149a5e770a796a0203e834fabd3bba18c10", Pod:"coredns-668d6bf9bc-twvvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f3e4ee1d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.579 [INFO][5276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.579 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" iface="eth0" netns="" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.579 [INFO][5276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.579 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.596 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.596 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.596 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.601 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.601 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" HandleID="k8s-pod-network.fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-coredns--668d6bf9bc--twvvf-eth0" Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.602 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:18.606282 containerd[1500]: 2025-01-30 06:18:18.604 [INFO][5276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a" Jan 30 06:18:18.606651 containerd[1500]: time="2025-01-30T06:18:18.606299528Z" level=info msg="TearDown network for sandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" successfully" Jan 30 06:18:18.609499 containerd[1500]: time="2025-01-30T06:18:18.609462699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:18:18.609499 containerd[1500]: time="2025-01-30T06:18:18.609512322Z" level=info msg="RemovePodSandbox \"fc4a212b0b2673efb8dc84893a97b6495b4d295505654edc5ca52ba51fd93b1a\" returns successfully" Jan 30 06:18:25.380691 containerd[1500]: time="2025-01-30T06:18:25.379083117Z" level=info msg="StopContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" with timeout 300 (s)" Jan 30 06:18:25.384177 containerd[1500]: time="2025-01-30T06:18:25.384027748Z" level=info msg="Stop container \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" with signal terminated" Jan 30 06:18:25.499477 containerd[1500]: time="2025-01-30T06:18:25.499428549Z" level=info msg="StopContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" with timeout 30 (s)" Jan 30 06:18:25.500477 containerd[1500]: time="2025-01-30T06:18:25.500425959Z" level=info msg="Stop container \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" with signal terminated" Jan 30 06:18:25.522238 systemd[1]: cri-containerd-6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4.scope: Deactivated successfully. Jan 30 06:18:25.560656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4-rootfs.mount: Deactivated successfully. Jan 30 06:18:25.575601 systemd[1]: sshd@0-78.47.103.36:22-125.74.237.67:51826.service: Deactivated successfully. 
Jan 30 06:18:25.596884 containerd[1500]: time="2025-01-30T06:18:25.559641289Z" level=info msg="shim disconnected" id=6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4 namespace=k8s.io Jan 30 06:18:25.604463 containerd[1500]: time="2025-01-30T06:18:25.604421414Z" level=warning msg="cleaning up after shim disconnected" id=6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4 namespace=k8s.io Jan 30 06:18:25.604463 containerd[1500]: time="2025-01-30T06:18:25.604453484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:25.647199 containerd[1500]: time="2025-01-30T06:18:25.646934176Z" level=info msg="StopContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" returns successfully" Jan 30 06:18:25.655517 containerd[1500]: time="2025-01-30T06:18:25.655382096Z" level=info msg="StopPodSandbox for \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\"" Jan 30 06:18:25.664206 containerd[1500]: time="2025-01-30T06:18:25.664171637Z" level=info msg="Container to stop \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 06:18:25.672742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3-shm.mount: Deactivated successfully. Jan 30 06:18:25.681048 containerd[1500]: time="2025-01-30T06:18:25.681004339Z" level=info msg="StopContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" with timeout 5 (s)" Jan 30 06:18:25.681941 containerd[1500]: time="2025-01-30T06:18:25.681746210Z" level=info msg="Stop container \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" with signal terminated" Jan 30 06:18:25.689311 systemd[1]: cri-containerd-f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3.scope: Deactivated successfully. Jan 30 06:18:25.730016 systemd[1]: cri-containerd-6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151.scope: Deactivated successfully. Jan 30 06:18:25.730944 systemd[1]: cri-containerd-6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151.scope: Consumed 2.070s CPU time. Jan 30 06:18:25.737287 containerd[1500]: time="2025-01-30T06:18:25.735528012Z" level=info msg="shim disconnected" id=f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3 namespace=k8s.io Jan 30 06:18:25.737287 containerd[1500]: time="2025-01-30T06:18:25.735696980Z" level=warning msg="cleaning up after shim disconnected" id=f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3 namespace=k8s.io Jan 30 06:18:25.737287 containerd[1500]: time="2025-01-30T06:18:25.735708511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:25.741638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3-rootfs.mount: Deactivated successfully. 
Jan 30 06:18:25.790926 containerd[1500]: time="2025-01-30T06:18:25.790371967Z" level=info msg="shim disconnected" id=6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151 namespace=k8s.io Jan 30 06:18:25.790926 containerd[1500]: time="2025-01-30T06:18:25.790417252Z" level=warning msg="cleaning up after shim disconnected" id=6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151 namespace=k8s.io Jan 30 06:18:25.790926 containerd[1500]: time="2025-01-30T06:18:25.790425518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:25.792422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151-rootfs.mount: Deactivated successfully. Jan 30 06:18:25.811881 containerd[1500]: time="2025-01-30T06:18:25.811780249Z" level=warning msg="cleanup warnings time=\"2025-01-30T06:18:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 06:18:25.823737 containerd[1500]: time="2025-01-30T06:18:25.823510413Z" level=info msg="StopContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" returns successfully" Jan 30 06:18:25.824697 containerd[1500]: time="2025-01-30T06:18:25.824499658Z" level=info msg="StopPodSandbox for \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\"" Jan 30 06:18:25.824697 containerd[1500]: time="2025-01-30T06:18:25.824532810Z" level=info msg="Container to stop \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 06:18:25.824697 containerd[1500]: time="2025-01-30T06:18:25.824543961Z" level=info msg="Container to stop \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 06:18:25.824697 containerd[1500]: time="2025-01-30T06:18:25.824554100Z" level=info msg="Container to stop \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 06:18:25.835529 systemd[1]: cri-containerd-59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8.scope: Deactivated successfully. 
Jan 30 06:18:25.867073 containerd[1500]: time="2025-01-30T06:18:25.866999657Z" level=info msg="shim disconnected" id=59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8 namespace=k8s.io Jan 30 06:18:25.867304 containerd[1500]: time="2025-01-30T06:18:25.867185314Z" level=warning msg="cleaning up after shim disconnected" id=59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8 namespace=k8s.io Jan 30 06:18:25.867304 containerd[1500]: time="2025-01-30T06:18:25.867200382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:25.872422 systemd-networkd[1401]: cali20eec47e75b: Link DOWN Jan 30 06:18:25.872428 systemd-networkd[1401]: cali20eec47e75b: Lost carrier Jan 30 06:18:25.907987 containerd[1500]: time="2025-01-30T06:18:25.907703077Z" level=info msg="TearDown network for sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" successfully" Jan 30 06:18:25.907987 containerd[1500]: time="2025-01-30T06:18:25.907732221Z" level=info msg="StopPodSandbox for \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" returns successfully" Jan 30 06:18:25.961944 kubelet[2655]: I0130 06:18:25.961885 2655 memory_manager.go:355] "RemoveStaleState removing state" podUID="54b87580-46dc-4595-a2b2-8b2f0959f962" containerName="calico-node" Jan 30 06:18:25.981161 systemd[1]: Created slice kubepods-besteffort-pod5c31c477_9caa_4801_a5fa_5df97ae65b67.slice - libcontainer container kubepods-besteffort-pod5c31c477_9caa_4801_a5fa_5df97ae65b67.slice. Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.870 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.870 [INFO][5445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" iface="eth0" netns="/var/run/netns/cni-d7c1865e-9091-01b6-0ccd-0e448d800fc9" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.871 [INFO][5445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" iface="eth0" netns="/var/run/netns/cni-d7c1865e-9091-01b6-0ccd-0e448d800fc9" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.881 [INFO][5445] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" after=10.586659ms iface="eth0" netns="/var/run/netns/cni-d7c1865e-9091-01b6-0ccd-0e448d800fc9" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.881 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.881 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.926 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.926 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.926 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.997 [INFO][5484] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.997 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:25.999 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:18:26.005492 containerd[1500]: 2025-01-30 06:18:26.003 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:18:26.006641 containerd[1500]: time="2025-01-30T06:18:26.005746853Z" level=info msg="TearDown network for sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" successfully" Jan 30 06:18:26.006641 containerd[1500]: time="2025-01-30T06:18:26.005776508Z" level=info msg="StopPodSandbox for \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" returns successfully" Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070452 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-net-dir\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070501 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-log-dir\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070532 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b87580-46dc-4595-a2b2-8b2f0959f962-tigera-ca-bundle\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070550 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-run-calico\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070563 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-policysync\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.070960 kubelet[2655]: I0130 06:18:26.070575 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-lib-calico\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070591 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-lib-modules\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070603 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-xtables-lock\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070617 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-bin-dir\") pod 
\"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070632 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-flexvol-driver-host\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070649 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmqc2\" (UniqueName: \"kubernetes.io/projected/54b87580-46dc-4595-a2b2-8b2f0959f962-kube-api-access-wmqc2\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071278 kubelet[2655]: I0130 06:18:26.070666 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54b87580-46dc-4595-a2b2-8b2f0959f962-node-certs\") pod \"54b87580-46dc-4595-a2b2-8b2f0959f962\" (UID: \"54b87580-46dc-4595-a2b2-8b2f0959f962\") " Jan 30 06:18:26.071392 kubelet[2655]: I0130 06:18:26.070712 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c31c477-9caa-4801-a5fa-5df97ae65b67-node-certs\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071392 kubelet[2655]: I0130 06:18:26.070731 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-policysync\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071392 kubelet[2655]: I0130 06:18:26.070753 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-var-run-calico\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071392 kubelet[2655]: I0130 06:18:26.070767 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-cni-bin-dir\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071392 kubelet[2655]: I0130 06:18:26.070783 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-lib-modules\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071495 kubelet[2655]: I0130 06:18:26.070799 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-var-lib-calico\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071495 kubelet[2655]: I0130 06:18:26.070814 2655 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-cni-log-dir\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071495 kubelet[2655]: I0130 06:18:26.070831 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-xtables-lock\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071495 kubelet[2655]: I0130 06:18:26.070844 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c31c477-9caa-4801-a5fa-5df97ae65b67-tigera-ca-bundle\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071495 kubelet[2655]: I0130 06:18:26.070856 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hs6\" (UniqueName: \"kubernetes.io/projected/5c31c477-9caa-4801-a5fa-5df97ae65b67-kube-api-access-77hs6\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071588 kubelet[2655]: I0130 06:18:26.070870 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-cni-net-dir\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.071588 kubelet[2655]: I0130 06:18:26.070882 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c31c477-9caa-4801-a5fa-5df97ae65b67-flexvol-driver-host\") pod \"calico-node-x9tdw\" (UID: \"5c31c477-9caa-4801-a5fa-5df97ae65b67\") " pod="calico-system/calico-node-x9tdw" Jan 30 06:18:26.072872 kubelet[2655]: I0130 06:18:26.069954 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.072872 kubelet[2655]: I0130 06:18:26.070944 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.072872 kubelet[2655]: I0130 06:18:26.072747 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.072872 kubelet[2655]: I0130 06:18:26.072766 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.072872 kubelet[2655]: I0130 06:18:26.072782 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-policysync" (OuterVolumeSpecName: "policysync") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.073032 kubelet[2655]: I0130 06:18:26.072799 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.073032 kubelet[2655]: I0130 06:18:26.072814 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.073032 kubelet[2655]: I0130 06:18:26.072829 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.073032 kubelet[2655]: I0130 06:18:26.072844 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 06:18:26.084460 kubelet[2655]: I0130 06:18:26.084421 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b87580-46dc-4595-a2b2-8b2f0959f962-kube-api-access-wmqc2" (OuterVolumeSpecName: "kube-api-access-wmqc2") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "kube-api-access-wmqc2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 06:18:26.086619 kubelet[2655]: I0130 06:18:26.086410 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b87580-46dc-4595-a2b2-8b2f0959f962-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 06:18:26.107680 kubelet[2655]: I0130 06:18:26.107181 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b87580-46dc-4595-a2b2-8b2f0959f962-node-certs" (OuterVolumeSpecName: "node-certs") pod "54b87580-46dc-4595-a2b2-8b2f0959f962" (UID: "54b87580-46dc-4595-a2b2-8b2f0959f962"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 06:18:26.123391 kubelet[2655]: I0130 06:18:26.123175 2655 scope.go:117] "RemoveContainer" containerID="6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151" Jan 30 06:18:26.147033 systemd[1]: Removed slice kubepods-besteffort-pod54b87580_46dc_4595_a2b2_8b2f0959f962.slice - libcontainer container kubepods-besteffort-pod54b87580_46dc_4595_a2b2_8b2f0959f962.slice. Jan 30 06:18:26.147356 systemd[1]: kubepods-besteffort-pod54b87580_46dc_4595_a2b2_8b2f0959f962.slice: Consumed 2.553s CPU time. Jan 30 06:18:26.156929 containerd[1500]: time="2025-01-30T06:18:26.156701969Z" level=info msg="RemoveContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\"" Jan 30 06:18:26.161616 containerd[1500]: time="2025-01-30T06:18:26.161511457Z" level=info msg="RemoveContainer for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" returns successfully" Jan 30 06:18:26.166366 kubelet[2655]: I0130 06:18:26.166330 2655 scope.go:117] "RemoveContainer" containerID="15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15" Jan 30 06:18:26.168354 containerd[1500]: time="2025-01-30T06:18:26.168333218Z" level=info msg="RemoveContainer for \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\"" Jan 30 06:18:26.171508 kubelet[2655]: I0130 06:18:26.171422 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41640a56-2c47-4ec4-99af-6f90c868637c-tigera-ca-bundle\") pod \"41640a56-2c47-4ec4-99af-6f90c868637c\" (UID: \"41640a56-2c47-4ec4-99af-6f90c868637c\") " Jan 30 06:18:26.171508 kubelet[2655]: I0130 06:18:26.171467 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d7hb\" (UniqueName: \"kubernetes.io/projected/41640a56-2c47-4ec4-99af-6f90c868637c-kube-api-access-2d7hb\") pod \"41640a56-2c47-4ec4-99af-6f90c868637c\" (UID: \"41640a56-2c47-4ec4-99af-6f90c868637c\") " Jan 30 06:18:26.171664 kubelet[2655]: I0130 06:18:26.171643 2655 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-lib-modules\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171664 kubelet[2655]: I0130 06:18:26.171664 2655 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-xtables-lock\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171672 2655 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-bin-dir\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171681 2655 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-net-dir\") on node 
\"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171688 2655 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b87580-46dc-4595-a2b2-8b2f0959f962-tigera-ca-bundle\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171697 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wmqc2\" (UniqueName: \"kubernetes.io/projected/54b87580-46dc-4595-a2b2-8b2f0959f962-kube-api-access-wmqc2\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171705 2655 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-flexvol-driver-host\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171713 2655 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54b87580-46dc-4595-a2b2-8b2f0959f962-node-certs\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171721 2655 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-cni-log-dir\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.171841 kubelet[2655]: I0130 06:18:26.171729 2655 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-run-calico\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.172036 kubelet[2655]: I0130 06:18:26.171737 2655 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-policysync\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.172036 kubelet[2655]: I0130 06:18:26.171744 2655 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54b87580-46dc-4595-a2b2-8b2f0959f962-var-lib-calico\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.173017 containerd[1500]: time="2025-01-30T06:18:26.172936621Z" level=info msg="RemoveContainer for \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\" returns successfully" Jan 30 06:18:26.192528 kubelet[2655]: I0130 06:18:26.192097 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41640a56-2c47-4ec4-99af-6f90c868637c-kube-api-access-2d7hb" (OuterVolumeSpecName: "kube-api-access-2d7hb") pod "41640a56-2c47-4ec4-99af-6f90c868637c" (UID: "41640a56-2c47-4ec4-99af-6f90c868637c"). InnerVolumeSpecName "kube-api-access-2d7hb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 06:18:26.193298 kubelet[2655]: I0130 06:18:26.193267 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41640a56-2c47-4ec4-99af-6f90c868637c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "41640a56-2c47-4ec4-99af-6f90c868637c" (UID: "41640a56-2c47-4ec4-99af-6f90c868637c"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 06:18:26.198345 kubelet[2655]: I0130 06:18:26.193832 2655 scope.go:117] "RemoveContainer" containerID="a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a" Jan 30 06:18:26.200067 containerd[1500]: time="2025-01-30T06:18:26.199906169Z" level=info msg="RemoveContainer for \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\"" Jan 30 06:18:26.204300 containerd[1500]: time="2025-01-30T06:18:26.204250095Z" level=info msg="RemoveContainer for \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\" returns successfully" Jan 30 06:18:26.204655 kubelet[2655]: I0130 06:18:26.204507 2655 scope.go:117] "RemoveContainer" containerID="6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151" Jan 30 06:18:26.220421 containerd[1500]: time="2025-01-30T06:18:26.209182693Z" level=error msg="ContainerStatus for \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\": not found" Jan 30 06:18:26.230229 kubelet[2655]: E0130 06:18:26.230181 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\": not found" containerID="6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151" Jan 30 06:18:26.230475 kubelet[2655]: I0130 06:18:26.230305 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151"} err="failed to get container status \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b788e833df2dec5e545c573142fa15dff7e14abb4841b0bce2ac6dfbf66b151\": not found" Jan 30 06:18:26.230475 kubelet[2655]: I0130 06:18:26.230343 2655 scope.go:117] "RemoveContainer" containerID="15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15" Jan 30 06:18:26.230645 containerd[1500]: time="2025-01-30T06:18:26.230602486Z" level=error msg="ContainerStatus for \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\": not found" Jan 30 06:18:26.230801 kubelet[2655]: E0130 06:18:26.230724 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\": not found" containerID="15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15" Jan 30 06:18:26.230801 kubelet[2655]: I0130 06:18:26.230751 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15"} err="failed to get container status \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\": rpc error: code = NotFound desc = an error occurred when try to find container \"15101bc637fc86fe6a422df05a460d0286c1109c277e5d39fcb3f284244adf15\": not found" Jan 30 06:18:26.230801 kubelet[2655]: I0130 06:18:26.230769 2655 scope.go:117] "RemoveContainer" 
containerID="a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a" Jan 30 06:18:26.231222 containerd[1500]: time="2025-01-30T06:18:26.231161124Z" level=error msg="ContainerStatus for \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\": not found" Jan 30 06:18:26.231417 kubelet[2655]: E0130 06:18:26.231270 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\": not found" containerID="a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a" Jan 30 06:18:26.231417 kubelet[2655]: I0130 06:18:26.231294 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a"} err="failed to get container status \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2f7cec33f470317f8e8d08c9ce7764ddd8fb03cc5581e7b3bde11e35960a15a\": not found" Jan 30 06:18:26.231417 kubelet[2655]: I0130 06:18:26.231307 2655 scope.go:117] "RemoveContainer" containerID="6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4" Jan 30 06:18:26.233298 containerd[1500]: time="2025-01-30T06:18:26.233259599Z" level=info msg="RemoveContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\"" Jan 30 06:18:26.237739 containerd[1500]: time="2025-01-30T06:18:26.237646654Z" level=info msg="RemoveContainer for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" returns successfully" Jan 30 06:18:26.237983 kubelet[2655]: I0130 06:18:26.237917 2655 scope.go:117] "RemoveContainer" containerID="6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4" Jan 30 06:18:26.238436 containerd[1500]: time="2025-01-30T06:18:26.238289610Z" level=error msg="ContainerStatus for \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\": not found" Jan 30 06:18:26.238484 kubelet[2655]: E0130 06:18:26.238398 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\": not found" containerID="6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4" Jan 30 06:18:26.238576 kubelet[2655]: I0130 06:18:26.238548 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4"} err="failed to get container status \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c9fe30d755d435f64e168f87c98796aca6d52c4c4d876bc146c115fc02438d4\": not found" Jan 30 06:18:26.272866 kubelet[2655]: I0130 06:18:26.272803 2655 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41640a56-2c47-4ec4-99af-6f90c868637c-tigera-ca-bundle\") on node \"ci-4081-3-0-a-a10ab07ed7\" 
DevicePath \"\"" Jan 30 06:18:26.272866 kubelet[2655]: I0130 06:18:26.272829 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2d7hb\" (UniqueName: \"kubernetes.io/projected/41640a56-2c47-4ec4-99af-6f90c868637c-kube-api-access-2d7hb\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.291203 containerd[1500]: time="2025-01-30T06:18:26.291146388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9tdw,Uid:5c31c477-9caa-4801-a5fa-5df97ae65b67,Namespace:calico-system,Attempt:0,}" Jan 30 06:18:26.356061 containerd[1500]: time="2025-01-30T06:18:26.355823140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:18:26.356061 containerd[1500]: time="2025-01-30T06:18:26.355875158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:18:26.356061 containerd[1500]: time="2025-01-30T06:18:26.355885868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:26.356061 containerd[1500]: time="2025-01-30T06:18:26.355971167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:26.378124 systemd[1]: Started cri-containerd-8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6.scope - libcontainer container 8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6. Jan 30 06:18:26.423237 systemd[1]: cri-containerd-781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee.scope: Deactivated successfully. Jan 30 06:18:26.445483 systemd[1]: Removed slice kubepods-besteffort-pod41640a56_2c47_4ec4_99af_6f90c868637c.slice - libcontainer container kubepods-besteffort-pod41640a56_2c47_4ec4_99af_6f90c868637c.slice. Jan 30 06:18:26.459207 containerd[1500]: time="2025-01-30T06:18:26.458023720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9tdw,Uid:5c31c477-9caa-4801-a5fa-5df97ae65b67,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\"" Jan 30 06:18:26.472004 systemd[1]: var-lib-kubelet-pods-41640a56\x2d2c47\x2d4ec4\x2d99af\x2d6f90c868637c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 30 06:18:26.473085 systemd[1]: run-netns-cni\x2dd7c1865e\x2d9091\x2d01b6\x2d0ccd\x2d0e448d800fc9.mount: Deactivated successfully. Jan 30 06:18:26.473191 systemd[1]: var-lib-kubelet-pods-54b87580\x2d46dc\x2d4595\x2da2b2\x2d8b2f0959f962-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 30 06:18:26.473260 systemd[1]: var-lib-kubelet-pods-41640a56\x2d2c47\x2d4ec4\x2d99af\x2d6f90c868637c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2d7hb.mount: Deactivated successfully. Jan 30 06:18:26.473329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8-rootfs.mount: Deactivated successfully. Jan 30 06:18:26.473396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8-shm.mount: Deactivated successfully. 
Jan 30 06:18:26.473467 systemd[1]: var-lib-kubelet-pods-54b87580\x2d46dc\x2d4595\x2da2b2\x2d8b2f0959f962-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwmqc2.mount: Deactivated successfully. Jan 30 06:18:26.473823 systemd[1]: var-lib-kubelet-pods-54b87580\x2d46dc\x2d4595\x2da2b2\x2d8b2f0959f962-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 30 06:18:26.494644 containerd[1500]: time="2025-01-30T06:18:26.494401039Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 06:18:26.496295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee-rootfs.mount: Deactivated successfully. Jan 30 06:18:26.502200 containerd[1500]: time="2025-01-30T06:18:26.502031608Z" level=info msg="shim disconnected" id=781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee namespace=k8s.io Jan 30 06:18:26.502200 containerd[1500]: time="2025-01-30T06:18:26.502072944Z" level=warning msg="cleaning up after shim disconnected" id=781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee namespace=k8s.io Jan 30 06:18:26.502200 containerd[1500]: time="2025-01-30T06:18:26.502080849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:26.585064 containerd[1500]: time="2025-01-30T06:18:26.584987153Z" level=info msg="StopContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" returns successfully" Jan 30 06:18:26.586032 containerd[1500]: time="2025-01-30T06:18:26.585180485Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e\"" Jan 30 06:18:26.588914 containerd[1500]: time="2025-01-30T06:18:26.588868591Z" level=info msg="StartContainer for \"6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e\"" Jan 30 06:18:26.595141 containerd[1500]: time="2025-01-30T06:18:26.592830088Z" level=info msg="StopPodSandbox for \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\"" Jan 30 06:18:26.595141 containerd[1500]: time="2025-01-30T06:18:26.592910158Z" level=info msg="Container to stop \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 06:18:26.597640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08-shm.mount: Deactivated successfully. Jan 30 06:18:26.613336 systemd[1]: cri-containerd-37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08.scope: Deactivated successfully. Jan 30 06:18:26.638232 systemd[1]: Started cri-containerd-6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e.scope - libcontainer container 6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e. 
Jan 30 06:18:26.676186 containerd[1500]: time="2025-01-30T06:18:26.675177443Z" level=info msg="StartContainer for \"6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e\" returns successfully" Jan 30 06:18:26.687698 containerd[1500]: time="2025-01-30T06:18:26.687527250Z" level=info msg="shim disconnected" id=37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08 namespace=k8s.io Jan 30 06:18:26.687698 containerd[1500]: time="2025-01-30T06:18:26.687583536Z" level=warning msg="cleaning up after shim disconnected" id=37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08 namespace=k8s.io Jan 30 06:18:26.687698 containerd[1500]: time="2025-01-30T06:18:26.687592532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:26.710999 containerd[1500]: time="2025-01-30T06:18:26.710872743Z" level=info msg="TearDown network for sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" successfully" Jan 30 06:18:26.710999 containerd[1500]: time="2025-01-30T06:18:26.710910173Z" level=info msg="StopPodSandbox for \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" returns successfully" Jan 30 06:18:26.784356 systemd[1]: cri-containerd-6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e.scope: Deactivated successfully. Jan 30 06:18:26.819514 containerd[1500]: time="2025-01-30T06:18:26.819459458Z" level=info msg="shim disconnected" id=6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e namespace=k8s.io Jan 30 06:18:26.819514 containerd[1500]: time="2025-01-30T06:18:26.819510254Z" level=warning msg="cleaning up after shim disconnected" id=6f853f83bc650f285549376f7b55c012b4ea84ca447b69c467cacfc26379499e namespace=k8s.io Jan 30 06:18:26.819514 containerd[1500]: time="2025-01-30T06:18:26.819518549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:26.883525 kubelet[2655]: I0130 06:18:26.883486 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-tigera-ca-bundle\") pod \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " Jan 30 06:18:26.883854 kubelet[2655]: I0130 06:18:26.883776 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-typha-certs\") pod \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " Jan 30 06:18:26.883854 kubelet[2655]: I0130 06:18:26.883851 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msf5n\" (UniqueName: \"kubernetes.io/projected/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-kube-api-access-msf5n\") pod \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\" (UID: \"41bb3004-e4ad-4cc9-a7de-dc2d30b14a04\") " Jan 30 06:18:26.892442 kubelet[2655]: I0130 06:18:26.892407 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04" (UID: "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 06:18:26.893610 kubelet[2655]: I0130 06:18:26.893575 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04" (UID: "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 06:18:26.894988 kubelet[2655]: I0130 06:18:26.894957 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-kube-api-access-msf5n" (OuterVolumeSpecName: "kube-api-access-msf5n") pod "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04" (UID: "41bb3004-e4ad-4cc9-a7de-dc2d30b14a04"). InnerVolumeSpecName "kube-api-access-msf5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 06:18:26.944868 kubelet[2655]: I0130 06:18:26.944264 2655 memory_manager.go:355] "RemoveStaleState removing state" podUID="41bb3004-e4ad-4cc9-a7de-dc2d30b14a04" containerName="calico-typha" Jan 30 06:18:26.944868 kubelet[2655]: I0130 06:18:26.944292 2655 memory_manager.go:355] "RemoveStaleState removing state" podUID="41640a56-2c47-4ec4-99af-6f90c868637c" containerName="calico-kube-controllers" Jan 30 06:18:26.964462 systemd[1]: Created slice kubepods-besteffort-podcc020a87_ab09_4220_aff4_92ddd14a9d92.slice - libcontainer container kubepods-besteffort-podcc020a87_ab09_4220_aff4_92ddd14a9d92.slice. Jan 30 06:18:26.985163 kubelet[2655]: I0130 06:18:26.985019 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-msf5n\" (UniqueName: \"kubernetes.io/projected/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-kube-api-access-msf5n\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.985163 kubelet[2655]: I0130 06:18:26.985092 2655 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-tigera-ca-bundle\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:26.985163 kubelet[2655]: I0130 06:18:26.985137 2655 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04-typha-certs\") on node \"ci-4081-3-0-a-a10ab07ed7\" DevicePath \"\"" Jan 30 06:18:27.085813 kubelet[2655]: I0130 06:18:27.085765 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d294z\" (UniqueName: \"kubernetes.io/projected/cc020a87-ab09-4220-aff4-92ddd14a9d92-kube-api-access-d294z\") pod \"calico-typha-59d7c65ccd-46phg\" (UID: \"cc020a87-ab09-4220-aff4-92ddd14a9d92\") " pod="calico-system/calico-typha-59d7c65ccd-46phg" Jan 30 06:18:27.085813 kubelet[2655]: I0130 06:18:27.085819 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc020a87-ab09-4220-aff4-92ddd14a9d92-tigera-ca-bundle\") pod \"calico-typha-59d7c65ccd-46phg\" (UID: \"cc020a87-ab09-4220-aff4-92ddd14a9d92\") " pod="calico-system/calico-typha-59d7c65ccd-46phg" Jan 30 06:18:27.085813 kubelet[2655]: I0130 06:18:27.085844 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc020a87-ab09-4220-aff4-92ddd14a9d92-typha-certs\") pod 
\"calico-typha-59d7c65ccd-46phg\" (UID: \"cc020a87-ab09-4220-aff4-92ddd14a9d92\") " pod="calico-system/calico-typha-59d7c65ccd-46phg" Jan 30 06:18:27.138816 kubelet[2655]: I0130 06:18:27.138778 2655 scope.go:117] "RemoveContainer" containerID="781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee" Jan 30 06:18:27.143637 containerd[1500]: time="2025-01-30T06:18:27.143569259Z" level=info msg="RemoveContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\"" Jan 30 06:18:27.149973 containerd[1500]: time="2025-01-30T06:18:27.149926709Z" level=info msg="RemoveContainer for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" returns successfully" Jan 30 06:18:27.152974 containerd[1500]: time="2025-01-30T06:18:27.152781212Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 06:18:27.153780 kubelet[2655]: I0130 06:18:27.153426 2655 scope.go:117] "RemoveContainer" containerID="781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee" Jan 30 06:18:27.153912 containerd[1500]: time="2025-01-30T06:18:27.153592402Z" level=error msg="ContainerStatus for \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\": not found" Jan 30 06:18:27.155581 kubelet[2655]: E0130 06:18:27.154408 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\": not found" containerID="781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee" Jan 30 06:18:27.155581 kubelet[2655]: I0130 06:18:27.154446 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee"} err="failed to get container status \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"781844dc09934a9315f4cd79e4100a72b196edc99a9c938b7964605a911746ee\": not found" Jan 30 06:18:27.156465 systemd[1]: Removed slice kubepods-besteffort-pod41bb3004_e4ad_4cc9_a7de_dc2d30b14a04.slice - libcontainer container kubepods-besteffort-pod41bb3004_e4ad_4cc9_a7de_dc2d30b14a04.slice. Jan 30 06:18:27.180132 containerd[1500]: time="2025-01-30T06:18:27.180053338Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833\"" Jan 30 06:18:27.180652 containerd[1500]: time="2025-01-30T06:18:27.180624249Z" level=info msg="StartContainer for \"3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833\"" Jan 30 06:18:27.223260 systemd[1]: Started cri-containerd-3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833.scope - libcontainer container 3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833. 
Jan 30 06:18:27.253700 containerd[1500]: time="2025-01-30T06:18:27.252933222Z" level=info msg="StartContainer for \"3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833\" returns successfully" Jan 30 06:18:27.270514 containerd[1500]: time="2025-01-30T06:18:27.270462150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d7c65ccd-46phg,Uid:cc020a87-ab09-4220-aff4-92ddd14a9d92,Namespace:calico-system,Attempt:0,}" Jan 30 06:18:27.292332 containerd[1500]: time="2025-01-30T06:18:27.291990566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:18:27.292332 containerd[1500]: time="2025-01-30T06:18:27.292235115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:18:27.292332 containerd[1500]: time="2025-01-30T06:18:27.292264420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:27.292773 containerd[1500]: time="2025-01-30T06:18:27.292692393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:27.314589 systemd[1]: Started cri-containerd-7213d863f9b52f82cdc5164a3be3c191414d86ba873a89bc48ea3edd058bc589.scope - libcontainer container 7213d863f9b52f82cdc5164a3be3c191414d86ba873a89bc48ea3edd058bc589. Jan 30 06:18:27.360775 containerd[1500]: time="2025-01-30T06:18:27.360724566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d7c65ccd-46phg,Uid:cc020a87-ab09-4220-aff4-92ddd14a9d92,Namespace:calico-system,Attempt:0,} returns sandbox id \"7213d863f9b52f82cdc5164a3be3c191414d86ba873a89bc48ea3edd058bc589\"" Jan 30 06:18:27.373669 containerd[1500]: time="2025-01-30T06:18:27.373631858Z" level=info msg="CreateContainer within sandbox \"7213d863f9b52f82cdc5164a3be3c191414d86ba873a89bc48ea3edd058bc589\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 06:18:27.383609 containerd[1500]: time="2025-01-30T06:18:27.383530820Z" level=info msg="CreateContainer within sandbox \"7213d863f9b52f82cdc5164a3be3c191414d86ba873a89bc48ea3edd058bc589\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"362bd9ab2b65894f6a9ec31c142227db4fe746a3894098aa696533a5f13784bb\"" Jan 30 06:18:27.384523 containerd[1500]: time="2025-01-30T06:18:27.384478375Z" level=info msg="StartContainer for \"362bd9ab2b65894f6a9ec31c142227db4fe746a3894098aa696533a5f13784bb\"" Jan 30 06:18:27.413277 systemd[1]: Started cri-containerd-362bd9ab2b65894f6a9ec31c142227db4fe746a3894098aa696533a5f13784bb.scope - libcontainer container 362bd9ab2b65894f6a9ec31c142227db4fe746a3894098aa696533a5f13784bb. Jan 30 06:18:27.473253 systemd[1]: var-lib-kubelet-pods-41bb3004\x2de4ad\x2d4cc9\x2da7de\x2ddc2d30b14a04-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 30 06:18:27.473609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08-rootfs.mount: Deactivated successfully. Jan 30 06:18:27.473928 systemd[1]: var-lib-kubelet-pods-41bb3004\x2de4ad\x2d4cc9\x2da7de\x2ddc2d30b14a04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmsf5n.mount: Deactivated successfully. 
Jan 30 06:18:27.474323 systemd[1]: var-lib-kubelet-pods-41bb3004\x2de4ad\x2d4cc9\x2da7de\x2ddc2d30b14a04-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 30 06:18:27.487947 containerd[1500]: time="2025-01-30T06:18:27.487856024Z" level=info msg="StartContainer for \"362bd9ab2b65894f6a9ec31c142227db4fe746a3894098aa696533a5f13784bb\" returns successfully" Jan 30 06:18:27.737235 kubelet[2655]: I0130 06:18:27.737137 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41640a56-2c47-4ec4-99af-6f90c868637c" path="/var/lib/kubelet/pods/41640a56-2c47-4ec4-99af-6f90c868637c/volumes" Jan 30 06:18:27.738895 kubelet[2655]: I0130 06:18:27.738647 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41bb3004-e4ad-4cc9-a7de-dc2d30b14a04" path="/var/lib/kubelet/pods/41bb3004-e4ad-4cc9-a7de-dc2d30b14a04/volumes" Jan 30 06:18:27.740620 kubelet[2655]: I0130 06:18:27.740601 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b87580-46dc-4595-a2b2-8b2f0959f962" path="/var/lib/kubelet/pods/54b87580-46dc-4595-a2b2-8b2f0959f962/volumes" Jan 30 06:18:28.202439 kubelet[2655]: I0130 06:18:28.202382 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59d7c65ccd-46phg" podStartSLOduration=3.202365597 podStartE2EDuration="3.202365597s" podCreationTimestamp="2025-01-30 06:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:18:28.201896839 +0000 UTC m=+70.577467585" watchObservedRunningTime="2025-01-30 06:18:28.202365597 +0000 UTC m=+70.577936333" Jan 30 06:18:28.443171 systemd[1]: cri-containerd-3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833.scope: Deactivated successfully. Jan 30 06:18:28.444944 containerd[1500]: time="2025-01-30T06:18:28.444801609Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 30 06:18:28.486052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833-rootfs.mount: Deactivated successfully. Jan 30 06:18:28.490913 containerd[1500]: time="2025-01-30T06:18:28.490819285Z" level=info msg="shim disconnected" id=3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833 namespace=k8s.io Jan 30 06:18:28.490913 containerd[1500]: time="2025-01-30T06:18:28.490866293Z" level=warning msg="cleaning up after shim disconnected" id=3f403de3886f0d4f52d0882bb7ad2beb2ef25ff60d0f090c0d86352c66f59833 namespace=k8s.io Jan 30 06:18:28.490913 containerd[1500]: time="2025-01-30T06:18:28.490882764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 06:18:29.188189 containerd[1500]: time="2025-01-30T06:18:29.188096342Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 06:18:29.207500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444855499.mount: Deactivated successfully. 
Jan 30 06:18:29.214574 containerd[1500]: time="2025-01-30T06:18:29.213464719Z" level=info msg="CreateContainer within sandbox \"8f092c5cf9ebbd40b0613388cfb16549e51b5f761ea329c06b12e76fcb3ecfe6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025\"" Jan 30 06:18:29.214754 containerd[1500]: time="2025-01-30T06:18:29.214646775Z" level=info msg="StartContainer for \"5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025\"" Jan 30 06:18:29.244258 systemd[1]: Started cri-containerd-5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025.scope - libcontainer container 5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025. Jan 30 06:18:29.290359 containerd[1500]: time="2025-01-30T06:18:29.290269644Z" level=info msg="StartContainer for \"5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025\" returns successfully" Jan 30 06:18:29.408929 systemd[1]: Created slice kubepods-besteffort-podb8885521_f614_43e3_a23a_ba75169d72cb.slice - libcontainer container kubepods-besteffort-podb8885521_f614_43e3_a23a_ba75169d72cb.slice. Jan 30 06:18:29.503840 kubelet[2655]: I0130 06:18:29.503687 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8885521-f614-43e3-a23a-ba75169d72cb-tigera-ca-bundle\") pod \"calico-kube-controllers-6c94769dd-jvkcd\" (UID: \"b8885521-f614-43e3-a23a-ba75169d72cb\") " pod="calico-system/calico-kube-controllers-6c94769dd-jvkcd" Jan 30 06:18:29.504510 kubelet[2655]: I0130 06:18:29.504411 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttsz\" (UniqueName: \"kubernetes.io/projected/b8885521-f614-43e3-a23a-ba75169d72cb-kube-api-access-xttsz\") pod \"calico-kube-controllers-6c94769dd-jvkcd\" (UID: \"b8885521-f614-43e3-a23a-ba75169d72cb\") " pod="calico-system/calico-kube-controllers-6c94769dd-jvkcd" Jan 30 06:18:29.720307 containerd[1500]: time="2025-01-30T06:18:29.720255847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c94769dd-jvkcd,Uid:b8885521-f614-43e3-a23a-ba75169d72cb,Namespace:calico-system,Attempt:0,}" Jan 30 06:18:29.870647 systemd-networkd[1401]: cali92ed67700f1: Link UP Jan 30 06:18:29.870852 systemd-networkd[1401]: cali92ed67700f1: Gained carrier Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.784 [INFO][5880] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0 calico-kube-controllers-6c94769dd- calico-system b8885521-f614-43e3-a23a-ba75169d72cb 1046 0 2025-01-30 06:18:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c94769dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-a-a10ab07ed7 calico-kube-controllers-6c94769dd-jvkcd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92ed67700f1 [] []}} ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-" Jan 30 06:18:29.885837 
containerd[1500]: 2025-01-30 06:18:29.784 [INFO][5880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.818 [INFO][5892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" HandleID="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.829 [INFO][5892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" HandleID="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-a-a10ab07ed7", "pod":"calico-kube-controllers-6c94769dd-jvkcd", "timestamp":"2025-01-30 06:18:29.818588467 +0000 UTC"}, Hostname:"ci-4081-3-0-a-a10ab07ed7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.829 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.829 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.829 [INFO][5892] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a-a10ab07ed7' Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.832 [INFO][5892] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.838 [INFO][5892] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.844 [INFO][5892] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.845 [INFO][5892] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.849 [INFO][5892] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.849 [INFO][5892] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.851 [INFO][5892] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.855 [INFO][5892] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.863 [INFO][5892] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.199/26] block=192.168.94.192/26 handle="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.863 [INFO][5892] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.199/26] handle="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" host="ci-4081-3-0-a-a10ab07ed7" Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.863 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 06:18:29.885837 containerd[1500]: 2025-01-30 06:18:29.863 [INFO][5892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.199/26] IPv6=[] ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" HandleID="k8s-pod-network.273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.887255 containerd[1500]: 2025-01-30 06:18:29.866 [INFO][5880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0", GenerateName:"calico-kube-controllers-6c94769dd-", Namespace:"calico-system", SelfLink:"", UID:"b8885521-f614-43e3-a23a-ba75169d72cb", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 18, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c94769dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"", Pod:"calico-kube-controllers-6c94769dd-jvkcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92ed67700f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:29.887255 containerd[1500]: 2025-01-30 06:18:29.866 [INFO][5880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.199/32] ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.887255 containerd[1500]: 2025-01-30 06:18:29.866 [INFO][5880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92ed67700f1 ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.887255 containerd[1500]: 2025-01-30 06:18:29.869 [INFO][5880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.887255 
containerd[1500]: 2025-01-30 06:18:29.869 [INFO][5880] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0", GenerateName:"calico-kube-controllers-6c94769dd-", Namespace:"calico-system", SelfLink:"", UID:"b8885521-f614-43e3-a23a-ba75169d72cb", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 6, 18, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c94769dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a-a10ab07ed7", ContainerID:"273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a", Pod:"calico-kube-controllers-6c94769dd-jvkcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92ed67700f1", MAC:"52:93:11:51:ee:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 06:18:29.887255 containerd[1500]: 2025-01-30 06:18:29.879 [INFO][5880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a" Namespace="calico-system" Pod="calico-kube-controllers-6c94769dd-jvkcd" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6c94769dd--jvkcd-eth0" Jan 30 06:18:29.915057 containerd[1500]: time="2025-01-30T06:18:29.914931596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 06:18:29.915627 containerd[1500]: time="2025-01-30T06:18:29.915511864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 06:18:29.915627 containerd[1500]: time="2025-01-30T06:18:29.915532703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:29.919212 containerd[1500]: time="2025-01-30T06:18:29.917387421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 06:18:29.939234 systemd[1]: Started cri-containerd-273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a.scope - libcontainer container 273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a. 
Jan 30 06:18:29.980964 containerd[1500]: time="2025-01-30T06:18:29.980925790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c94769dd-jvkcd,Uid:b8885521-f614-43e3-a23a-ba75169d72cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a\"" Jan 30 06:18:29.988390 containerd[1500]: time="2025-01-30T06:18:29.988348367Z" level=info msg="CreateContainer within sandbox \"273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 06:18:30.000053 containerd[1500]: time="2025-01-30T06:18:30.000013180Z" level=info msg="CreateContainer within sandbox \"273d748b52225ade9db0b7c046bd9fd4dd62d53effcb057ff359b36b74d2aa7a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694\"" Jan 30 06:18:30.001762 containerd[1500]: time="2025-01-30T06:18:30.001736892Z" level=info msg="StartContainer for \"4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694\"" Jan 30 06:18:30.042312 systemd[1]: Started cri-containerd-4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694.scope - libcontainer container 4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694. Jan 30 06:18:30.091823 containerd[1500]: time="2025-01-30T06:18:30.091677958Z" level=info msg="StartContainer for \"4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694\" returns successfully" Jan 30 06:18:30.184072 kubelet[2655]: I0130 06:18:30.182848 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c94769dd-jvkcd" podStartSLOduration=4.182726091 podStartE2EDuration="4.182726091s" podCreationTimestamp="2025-01-30 06:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:18:30.179960224 +0000 UTC m=+72.555530961" watchObservedRunningTime="2025-01-30 06:18:30.182726091 +0000 UTC m=+72.558296857" Jan 30 06:18:31.821603 systemd-networkd[1401]: cali92ed67700f1: Gained IPv6LL Jan 30 06:18:39.288156 kubelet[2655]: I0130 06:18:39.288062 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:18:39.352419 kubelet[2655]: I0130 06:18:39.351506 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x9tdw" podStartSLOduration=14.351490823 podStartE2EDuration="14.351490823s" podCreationTimestamp="2025-01-30 06:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 06:18:30.229099273 +0000 UTC m=+72.604670009" watchObservedRunningTime="2025-01-30 06:18:39.351490823 +0000 UTC m=+81.727061569" Jan 30 06:19:01.250228 systemd[1]: run-containerd-runc-k8s.io-4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694-runc.164r6S.mount: Deactivated successfully. 
Jan 30 06:19:18.647304 containerd[1500]: time="2025-01-30T06:19:18.631957998Z" level=info msg="StopPodSandbox for \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\"" Jan 30 06:19:18.648021 containerd[1500]: time="2025-01-30T06:19:18.647338610Z" level=info msg="TearDown network for sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" successfully" Jan 30 06:19:18.648021 containerd[1500]: time="2025-01-30T06:19:18.647355131Z" level=info msg="StopPodSandbox for \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" returns successfully" Jan 30 06:19:18.661646 containerd[1500]: time="2025-01-30T06:19:18.661606116Z" level=info msg="RemovePodSandbox for \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\"" Jan 30 06:19:18.665059 containerd[1500]: time="2025-01-30T06:19:18.663277130Z" level=info msg="Forcibly stopping sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\"" Jan 30 06:19:18.665059 containerd[1500]: time="2025-01-30T06:19:18.663375724Z" level=info msg="TearDown network for sandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" successfully" Jan 30 06:19:18.685672 containerd[1500]: time="2025-01-30T06:19:18.685629135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:19:18.685742 containerd[1500]: time="2025-01-30T06:19:18.685682375Z" level=info msg="RemovePodSandbox \"59adf8d7f6d20201d9f41ab4fe1d72cda38b00621a24d88e5a8d9039d97312d8\" returns successfully" Jan 30 06:19:18.686210 containerd[1500]: time="2025-01-30T06:19:18.686012123Z" level=info msg="StopPodSandbox for \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\"" Jan 30 06:19:18.686210 containerd[1500]: time="2025-01-30T06:19:18.686072507Z" level=info msg="TearDown network for sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" successfully" Jan 30 06:19:18.686210 containerd[1500]: time="2025-01-30T06:19:18.686081844Z" level=info msg="StopPodSandbox for \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" returns successfully" Jan 30 06:19:18.686409 containerd[1500]: time="2025-01-30T06:19:18.686375725Z" level=info msg="RemovePodSandbox for \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\"" Jan 30 06:19:18.686409 containerd[1500]: time="2025-01-30T06:19:18.686401894Z" level=info msg="Forcibly stopping sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\"" Jan 30 06:19:18.686516 containerd[1500]: time="2025-01-30T06:19:18.686492754Z" level=info msg="TearDown network for sandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" successfully" Jan 30 06:19:18.689969 containerd[1500]: time="2025-01-30T06:19:18.689938706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 06:19:18.690029 containerd[1500]: time="2025-01-30T06:19:18.689991836Z" level=info msg="RemovePodSandbox \"37fca1cf896ca04ba5f6b743bf718c664ea1700fff53c97204ca17d4178c9c08\" returns successfully" Jan 30 06:19:18.690314 containerd[1500]: time="2025-01-30T06:19:18.690282782Z" level=info msg="StopPodSandbox for \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\"" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.766 [WARNING][6929] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.767 [INFO][6929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.767 [INFO][6929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" iface="eth0" netns="" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.767 [INFO][6929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.767 [INFO][6929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.824 [INFO][6935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.826 [INFO][6935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.826 [INFO][6935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.835 [WARNING][6935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.835 [INFO][6935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.836 [INFO][6935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:19:18.841891 containerd[1500]: 2025-01-30 06:19:18.839 [INFO][6929] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.843396 containerd[1500]: time="2025-01-30T06:19:18.841923477Z" level=info msg="TearDown network for sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" successfully" Jan 30 06:19:18.843396 containerd[1500]: time="2025-01-30T06:19:18.841960517Z" level=info msg="StopPodSandbox for \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" returns successfully" Jan 30 06:19:18.843396 containerd[1500]: time="2025-01-30T06:19:18.842390854Z" level=info msg="RemovePodSandbox for \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\"" Jan 30 06:19:18.843396 containerd[1500]: time="2025-01-30T06:19:18.842429357Z" level=info msg="Forcibly stopping sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\"" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.874 [WARNING][6954] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" WorkloadEndpoint="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.874 [INFO][6954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.874 [INFO][6954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" iface="eth0" netns="" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.874 [INFO][6954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.874 [INFO][6954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.901 [INFO][6960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.901 [INFO][6960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.901 [INFO][6960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.907 [WARNING][6960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.907 [INFO][6960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" HandleID="k8s-pod-network.f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Workload="ci--4081--3--0--a--a10ab07ed7-k8s-calico--kube--controllers--6b57545dfb--9fz6l-eth0" Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.909 [INFO][6960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 06:19:18.915327 containerd[1500]: 2025-01-30 06:19:18.912 [INFO][6954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3" Jan 30 06:19:18.915327 containerd[1500]: time="2025-01-30T06:19:18.915280300Z" level=info msg="TearDown network for sandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" successfully" Jan 30 06:19:18.962665 containerd[1500]: time="2025-01-30T06:19:18.962601981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 06:19:18.962791 containerd[1500]: time="2025-01-30T06:19:18.962677253Z" level=info msg="RemovePodSandbox \"f4d6cf78f5e749cce96d96ebf9af0f5d2b809a64eb0c36a3820432ddf89f4eb3\" returns successfully" Jan 30 06:19:28.842704 systemd[1]: Started sshd@12-78.47.103.36:22-183.110.116.126:34076.service - OpenSSH per-connection server daemon (183.110.116.126:34076). Jan 30 06:19:29.753548 systemd[1]: run-containerd-runc-k8s.io-4af9ed8bf95b5830c95f2fad94b193aeb365d58f9459466debb736cc2c2a5694-runc.sFXFX3.mount: Deactivated successfully. Jan 30 06:19:31.249634 systemd[1]: run-containerd-runc-k8s.io-5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025-runc.BhSVq9.mount: Deactivated successfully. Jan 30 06:19:32.692715 sshd[6977]: Invalid user frontend from 183.110.116.126 port 34076 Jan 30 06:19:32.944658 sshd[6977]: Received disconnect from 183.110.116.126 port 34076:11: Bye Bye [preauth] Jan 30 06:19:32.944658 sshd[6977]: Disconnected from invalid user frontend 183.110.116.126 port 34076 [preauth] Jan 30 06:19:32.948095 systemd[1]: sshd@12-78.47.103.36:22-183.110.116.126:34076.service: Deactivated successfully. Jan 30 06:19:56.076378 systemd[1]: sshd@10-78.47.103.36:22-125.74.237.67:40922.service: Deactivated successfully. Jan 30 06:19:59.916481 systemd[1]: Started sshd@13-78.47.103.36:22-139.178.89.65:36158.service - OpenSSH per-connection server daemon (139.178.89.65:36158). Jan 30 06:20:00.939421 sshd[7057]: Accepted publickey for core from 139.178.89.65 port 36158 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:00.943851 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:00.962542 systemd-logind[1477]: New session 8 of user core. Jan 30 06:20:00.969447 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 06:20:01.208405 systemd[1]: run-containerd-runc-k8s.io-5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025-runc.BgebMQ.mount: Deactivated successfully. Jan 30 06:20:02.063300 sshd[7057]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:02.070438 systemd[1]: sshd@13-78.47.103.36:22-139.178.89.65:36158.service: Deactivated successfully. Jan 30 06:20:02.074679 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 06:20:02.077267 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Jan 30 06:20:02.078787 systemd-logind[1477]: Removed session 8. Jan 30 06:20:07.236380 systemd[1]: Started sshd@14-78.47.103.36:22-139.178.89.65:34548.service - OpenSSH per-connection server daemon (139.178.89.65:34548). Jan 30 06:20:08.242070 sshd[7113]: Accepted publickey for core from 139.178.89.65 port 34548 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:08.243813 sshd[7113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:08.248694 systemd-logind[1477]: New session 9 of user core. Jan 30 06:20:08.254232 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 06:20:08.999071 sshd[7113]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:09.003125 systemd[1]: sshd@14-78.47.103.36:22-139.178.89.65:34548.service: Deactivated successfully. Jan 30 06:20:09.005707 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 06:20:09.007663 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Jan 30 06:20:09.009082 systemd-logind[1477]: Removed session 9. Jan 30 06:20:14.173136 systemd[1]: Started sshd@15-78.47.103.36:22-139.178.89.65:58138.service - OpenSSH per-connection server daemon (139.178.89.65:58138). Jan 30 06:20:15.166071 sshd[7127]: Accepted publickey for core from 139.178.89.65 port 58138 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:15.167927 sshd[7127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:15.173996 systemd-logind[1477]: New session 10 of user core. Jan 30 06:20:15.178282 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 06:20:15.906719 sshd[7127]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:15.911231 systemd[1]: sshd@15-78.47.103.36:22-139.178.89.65:58138.service: Deactivated successfully. Jan 30 06:20:15.913582 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 06:20:15.914435 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Jan 30 06:20:15.915931 systemd-logind[1477]: Removed session 10. Jan 30 06:20:16.076377 systemd[1]: Started sshd@16-78.47.103.36:22-139.178.89.65:58152.service - OpenSSH per-connection server daemon (139.178.89.65:58152). Jan 30 06:20:17.043954 sshd[7142]: Accepted publickey for core from 139.178.89.65 port 58152 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:17.045711 sshd[7142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:17.050494 systemd-logind[1477]: New session 11 of user core. Jan 30 06:20:17.054284 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 06:20:17.833235 sshd[7142]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:17.837369 systemd[1]: sshd@16-78.47.103.36:22-139.178.89.65:58152.service: Deactivated successfully. Jan 30 06:20:17.840622 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 30 06:20:17.842702 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Jan 30 06:20:17.844033 systemd-logind[1477]: Removed session 11. Jan 30 06:20:18.005401 systemd[1]: Started sshd@17-78.47.103.36:22-139.178.89.65:58164.service - OpenSSH per-connection server daemon (139.178.89.65:58164). Jan 30 06:20:19.009623 sshd[7155]: Accepted publickey for core from 139.178.89.65 port 58164 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:19.011423 sshd[7155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:19.015941 systemd-logind[1477]: New session 12 of user core. Jan 30 06:20:19.023267 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 06:20:19.752294 sshd[7155]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:19.755578 systemd[1]: sshd@17-78.47.103.36:22-139.178.89.65:58164.service: Deactivated successfully. Jan 30 06:20:19.757995 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 06:20:19.760298 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Jan 30 06:20:19.761889 systemd-logind[1477]: Removed session 12. Jan 30 06:20:24.923553 systemd[1]: Started sshd@18-78.47.103.36:22-139.178.89.65:33878.service - OpenSSH per-connection server daemon (139.178.89.65:33878). Jan 30 06:20:25.915418 sshd[7182]: Accepted publickey for core from 139.178.89.65 port 33878 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:25.917709 sshd[7182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:25.924398 systemd-logind[1477]: New session 13 of user core. Jan 30 06:20:25.928277 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 06:20:26.652036 sshd[7182]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:26.656081 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Jan 30 06:20:26.656907 systemd[1]: sshd@18-78.47.103.36:22-139.178.89.65:33878.service: Deactivated successfully. Jan 30 06:20:26.659580 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 06:20:26.660556 systemd-logind[1477]: Removed session 13. Jan 30 06:20:26.827506 systemd[1]: Started sshd@19-78.47.103.36:22-139.178.89.65:33890.service - OpenSSH per-connection server daemon (139.178.89.65:33890). Jan 30 06:20:27.817359 sshd[7195]: Accepted publickey for core from 139.178.89.65 port 33890 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:27.819178 sshd[7195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:27.830227 systemd-logind[1477]: New session 14 of user core. Jan 30 06:20:27.837279 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 06:20:28.780501 sshd[7195]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:28.784494 systemd[1]: sshd@19-78.47.103.36:22-139.178.89.65:33890.service: Deactivated successfully. Jan 30 06:20:28.788638 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 06:20:28.791501 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Jan 30 06:20:28.792732 systemd-logind[1477]: Removed session 14. Jan 30 06:20:28.949612 systemd[1]: Started sshd@20-78.47.103.36:22-139.178.89.65:33902.service - OpenSSH per-connection server daemon (139.178.89.65:33902). 
Jan 30 06:20:29.941630 sshd[7206]: Accepted publickey for core from 139.178.89.65 port 33902 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:29.946695 sshd[7206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:29.960276 systemd-logind[1477]: New session 15 of user core. Jan 30 06:20:29.966269 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 06:20:31.243561 systemd[1]: run-containerd-runc-k8s.io-5fca95bfec12b484a3cc7c95014642b2c5cfbbd473334708cc73c33ba3aa2025-runc.aueWZS.mount: Deactivated successfully. Jan 30 06:20:31.685250 sshd[7206]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:31.691487 systemd[1]: sshd@20-78.47.103.36:22-139.178.89.65:33902.service: Deactivated successfully. Jan 30 06:20:31.694185 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 06:20:31.697380 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Jan 30 06:20:31.699374 systemd-logind[1477]: Removed session 15. Jan 30 06:20:31.862380 systemd[1]: Started sshd@21-78.47.103.36:22-139.178.89.65:55262.service - OpenSSH per-connection server daemon (139.178.89.65:55262). Jan 30 06:20:32.873063 sshd[7282]: Accepted publickey for core from 139.178.89.65 port 55262 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:32.875948 sshd[7282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:32.883360 systemd-logind[1477]: New session 16 of user core. Jan 30 06:20:32.890382 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 06:20:33.803678 sshd[7282]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:33.810845 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Jan 30 06:20:33.812021 systemd[1]: sshd@21-78.47.103.36:22-139.178.89.65:55262.service: Deactivated successfully. Jan 30 06:20:33.814654 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 06:20:33.816624 systemd-logind[1477]: Removed session 16. Jan 30 06:20:33.985686 systemd[1]: Started sshd@22-78.47.103.36:22-139.178.89.65:55278.service - OpenSSH per-connection server daemon (139.178.89.65:55278). Jan 30 06:20:34.987546 sshd[7292]: Accepted publickey for core from 139.178.89.65 port 55278 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk Jan 30 06:20:34.989609 sshd[7292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 06:20:34.996288 systemd-logind[1477]: New session 17 of user core. Jan 30 06:20:35.001689 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 06:20:35.744186 sshd[7292]: pam_unix(sshd:session): session closed for user core Jan 30 06:20:35.753192 systemd[1]: sshd@22-78.47.103.36:22-139.178.89.65:55278.service: Deactivated successfully. Jan 30 06:20:35.754327 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Jan 30 06:20:35.760456 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 06:20:35.766300 systemd-logind[1477]: Removed session 17. Jan 30 06:20:40.921379 systemd[1]: Started sshd@23-78.47.103.36:22-139.178.89.65:55284.service - OpenSSH per-connection server daemon (139.178.89.65:55284). 
Jan 30 06:20:41.912826 sshd[7323]: Accepted publickey for core from 139.178.89.65 port 55284 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk
Jan 30 06:20:41.914596 sshd[7323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 06:20:41.919267 systemd-logind[1477]: New session 18 of user core.
Jan 30 06:20:41.926307 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 06:20:42.660945 sshd[7323]: pam_unix(sshd:session): session closed for user core
Jan 30 06:20:42.666396 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit.
Jan 30 06:20:42.666537 systemd[1]: sshd@23-78.47.103.36:22-139.178.89.65:55284.service: Deactivated successfully.
Jan 30 06:20:42.669301 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 06:20:42.670857 systemd-logind[1477]: Removed session 18.
Jan 30 06:20:47.842651 systemd[1]: Started sshd@24-78.47.103.36:22-139.178.89.65:37808.service - OpenSSH per-connection server daemon (139.178.89.65:37808).
Jan 30 06:20:48.859402 sshd[7336]: Accepted publickey for core from 139.178.89.65 port 37808 ssh2: RSA SHA256:tOGKYFU4gNvdonCHRhxkxlwCu2SjV/qSIk+LzO3WdFk
Jan 30 06:20:48.862283 sshd[7336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 06:20:48.871326 systemd-logind[1477]: New session 19 of user core.
Jan 30 06:20:48.876414 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 06:20:49.611366 sshd[7336]: pam_unix(sshd:session): session closed for user core
Jan 30 06:20:49.616977 systemd[1]: sshd@24-78.47.103.36:22-139.178.89.65:37808.service: Deactivated successfully.
Jan 30 06:20:49.619962 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 06:20:49.622004 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit.
Jan 30 06:20:49.623402 systemd-logind[1477]: Removed session 19.
Jan 30 06:20:53.111409 systemd[1]: Started sshd@25-78.47.103.36:22-183.110.116.126:60834.service - OpenSSH per-connection server daemon (183.110.116.126:60834).
Jan 30 06:20:54.902304 sshd[7350]: Invalid user sanjeev from 183.110.116.126 port 60834
Jan 30 06:20:55.259471 sshd[7350]: Received disconnect from 183.110.116.126 port 60834:11: Bye Bye [preauth]
Jan 30 06:20:55.259471 sshd[7350]: Disconnected from invalid user sanjeev 183.110.116.126 port 60834 [preauth]
Jan 30 06:20:55.261916 systemd[1]: sshd@25-78.47.103.36:22-183.110.116.126:60834.service: Deactivated successfully.
Jan 30 06:21:15.812222 systemd[1]: cri-containerd-161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c.scope: Deactivated successfully.
Jan 30 06:21:15.812610 systemd[1]: cri-containerd-161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c.scope: Consumed 7.415s CPU time.
Jan 30 06:21:15.968728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c-rootfs.mount: Deactivated successfully.
Jan 30 06:21:16.045014 containerd[1500]: time="2025-01-30T06:21:16.004859356Z" level=info msg="shim disconnected" id=161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c namespace=k8s.io
Jan 30 06:21:16.046051 containerd[1500]: time="2025-01-30T06:21:16.045544303Z" level=warning msg="cleaning up after shim disconnected" id=161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c namespace=k8s.io
Jan 30 06:21:16.046051 containerd[1500]: time="2025-01-30T06:21:16.045576773Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 06:21:16.267085 kubelet[2655]: E0130 06:21:16.267022 2655 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46466->10.0.0.2:2379: read: connection timed out"
Jan 30 06:21:16.670835 kubelet[2655]: I0130 06:21:16.670776 2655 scope.go:117] "RemoveContainer" containerID="161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c"
Jan 30 06:21:16.718611 containerd[1500]: time="2025-01-30T06:21:16.718552621Z" level=info msg="CreateContainer within sandbox \"4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 30 06:21:16.743323 systemd[1]: cri-containerd-a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f.scope: Deactivated successfully.
Jan 30 06:21:16.743659 systemd[1]: cri-containerd-a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f.scope: Consumed 4.024s CPU time, 17.9M memory peak, 0B memory swap peak.
Jan 30 06:21:16.785326 containerd[1500]: time="2025-01-30T06:21:16.782863103Z" level=info msg="shim disconnected" id=a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f namespace=k8s.io
Jan 30 06:21:16.785326 containerd[1500]: time="2025-01-30T06:21:16.782933015Z" level=warning msg="cleaning up after shim disconnected" id=a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f namespace=k8s.io
Jan 30 06:21:16.785326 containerd[1500]: time="2025-01-30T06:21:16.782946871Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 06:21:16.789805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f-rootfs.mount: Deactivated successfully.
Jan 30 06:21:16.848092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710846923.mount: Deactivated successfully.
Jan 30 06:21:16.855032 containerd[1500]: time="2025-01-30T06:21:16.854986377Z" level=info msg="CreateContainer within sandbox \"4e8420e04f1315dd48794de349f3e9d077eb1ecfde9cb755a992acecec0f8fa3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652\""
Jan 30 06:21:16.855499 containerd[1500]: time="2025-01-30T06:21:16.855471817Z" level=info msg="StartContainer for \"413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652\""
Jan 30 06:21:16.906237 systemd[1]: Started cri-containerd-413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652.scope - libcontainer container 413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652.
Jan 30 06:21:16.938382 containerd[1500]: time="2025-01-30T06:21:16.938257807Z" level=info msg="StartContainer for \"413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652\" returns successfully"
Jan 30 06:21:17.666030 kubelet[2655]: I0130 06:21:17.665992 2655 scope.go:117] "RemoveContainer" containerID="a02ba7abdff144a8d6433537c14d5db7261e19b941d8d688a53e1cf7b1c6269f"
Jan 30 06:21:17.672294 containerd[1500]: time="2025-01-30T06:21:17.672254529Z" level=info msg="CreateContainer within sandbox \"d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 06:21:17.691983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521540307.mount: Deactivated successfully.
Jan 30 06:21:17.703876 containerd[1500]: time="2025-01-30T06:21:17.703820214Z" level=info msg="CreateContainer within sandbox \"d8b6adcad9b2f3e0e1ad4ea1fbcedb8b188c11098d43fb36a608c4a214a20248\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"909e0e394c1ffd98bbe6d127d94da519e9216f019b46e7ba1e79cff30fe97bb5\""
Jan 30 06:21:17.704243 containerd[1500]: time="2025-01-30T06:21:17.704215045Z" level=info msg="StartContainer for \"909e0e394c1ffd98bbe6d127d94da519e9216f019b46e7ba1e79cff30fe97bb5\""
Jan 30 06:21:17.736231 systemd[1]: Started cri-containerd-909e0e394c1ffd98bbe6d127d94da519e9216f019b46e7ba1e79cff30fe97bb5.scope - libcontainer container 909e0e394c1ffd98bbe6d127d94da519e9216f019b46e7ba1e79cff30fe97bb5.
Jan 30 06:21:17.782043 containerd[1500]: time="2025-01-30T06:21:17.781910828Z" level=info msg="StartContainer for \"909e0e394c1ffd98bbe6d127d94da519e9216f019b46e7ba1e79cff30fe97bb5\" returns successfully"
Jan 30 06:21:18.292678 kubelet[2655]: E0130 06:21:18.289683 2655 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46300->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-a-a10ab07ed7.181f641f89486a5c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-a-a10ab07ed7,UID:ce9b9d4d856568bd0e8f4ea187be8b39,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-a-a10ab07ed7,},FirstTimestamp:2025-01-30 06:21:07.798288988 +0000 UTC m=+230.173859755,LastTimestamp:2025-01-30 06:21:07.798288988 +0000 UTC m=+230.173859755,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-a-a10ab07ed7,}"
Jan 30 06:21:21.298812 systemd[1]: cri-containerd-413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652.scope: Deactivated successfully.
Jan 30 06:21:21.339944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652-rootfs.mount: Deactivated successfully.
Jan 30 06:21:21.346835 containerd[1500]: time="2025-01-30T06:21:21.346722837Z" level=info msg="shim disconnected" id=413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652 namespace=k8s.io
Jan 30 06:21:21.347528 containerd[1500]: time="2025-01-30T06:21:21.346850848Z" level=warning msg="cleaning up after shim disconnected" id=413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652 namespace=k8s.io
Jan 30 06:21:21.347528 containerd[1500]: time="2025-01-30T06:21:21.346868701Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 06:21:21.683487 kubelet[2655]: I0130 06:21:21.683440 2655 scope.go:117] "RemoveContainer" containerID="161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c"
Jan 30 06:21:21.684476 kubelet[2655]: I0130 06:21:21.684260 2655 scope.go:117] "RemoveContainer" containerID="413fce50013ba82656eabbcc2079738b6cd4c416f1c676a8c33428e084767652"
Jan 30 06:21:21.684476 kubelet[2655]: E0130 06:21:21.684433 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7d68577dc5-5mlz7_tigera-operator(87e5e8ed-54a5-482b-b7b5-b932768f3a98)\"" pod="tigera-operator/tigera-operator-7d68577dc5-5mlz7" podUID="87e5e8ed-54a5-482b-b7b5-b932768f3a98"
Jan 30 06:21:21.697737 containerd[1500]: time="2025-01-30T06:21:21.697686824Z" level=info msg="RemoveContainer for \"161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c\""
Jan 30 06:21:21.704792 containerd[1500]: time="2025-01-30T06:21:21.704754699Z" level=info msg="RemoveContainer for \"161d644eb29a8c4c4af6ed0d6b5df4a754577c48ac8d7cf8e61ebed3a39b5e7c\" returns successfully"