Jan 17 12:26:25.899245 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:26:25.899277 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:26:25.899286 kernel: BIOS-provided physical RAM map: Jan 17 12:26:25.899291 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:26:25.899296 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:26:25.899301 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:26:25.899307 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jan 17 12:26:25.899312 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jan 17 12:26:25.899329 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:26:25.899341 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 17 12:26:25.899347 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:26:25.899352 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:26:25.899357 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 17 12:26:25.899362 kernel: NX (Execute Disable) protection: active Jan 17 12:26:25.899371 kernel: APIC: Static calls initialized Jan 17 12:26:25.899376 kernel: SMBIOS 3.0.0 present. 
Jan 17 12:26:25.899382 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 17 12:26:25.899387 kernel: Hypervisor detected: KVM Jan 17 12:26:25.899393 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:26:25.899398 kernel: kvm-clock: using sched offset of 2951450628 cycles Jan 17 12:26:25.899404 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:26:25.899410 kernel: tsc: Detected 2445.404 MHz processor Jan 17 12:26:25.899416 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:26:25.899424 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:26:25.899429 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jan 17 12:26:25.899435 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:26:25.899441 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:26:25.899447 kernel: Using GB pages for direct mapping Jan 17 12:26:25.899452 kernel: ACPI: Early table checksum verification disabled Jan 17 12:26:25.899458 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Jan 17 12:26:25.899463 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899469 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899476 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899482 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jan 17 12:26:25.899488 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899493 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899499 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899504 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:26:25.899510 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Jan 17 12:26:25.899516 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Jan 17 12:26:25.899526 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jan 17 12:26:25.899532 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Jan 17 12:26:25.899538 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Jan 17 12:26:25.899554 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Jan 17 12:26:25.899560 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Jan 17 12:26:25.899565 kernel: No NUMA configuration found Jan 17 12:26:25.899574 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jan 17 12:26:25.899580 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jan 17 12:26:25.899586 kernel: Zone ranges: Jan 17 12:26:25.899592 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:26:25.899597 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jan 17 12:26:25.899603 kernel: Normal empty Jan 17 12:26:25.899609 kernel: Movable zone start for each node Jan 17 12:26:25.899615 kernel: Early memory node ranges Jan 17 12:26:25.899621 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:26:25.899626 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jan 17 12:26:25.899634 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jan 17 12:26:25.899640 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:26:25.899646 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:26:25.899652 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 17 12:26:25.899657 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:26:25.899663 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:26:25.899669 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:26:25.899675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:26:25.899681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:26:25.899689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:26:25.899695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:26:25.899701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:26:25.899707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:26:25.899713 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:26:25.899718 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:26:25.899724 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:26:25.899730 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 12:26:25.899736 kernel: Booting paravirtualized kernel on KVM Jan 17 12:26:25.899744 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:26:25.899750 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:26:25.899756 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:26:25.899761 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:26:25.899767 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:26:25.899773 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 12:26:25.899780 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:26:25.899786 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:26:25.899794 kernel: random: crng init done Jan 17 12:26:25.899800 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:26:25.899806 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:26:25.899811 kernel: Fallback order for Node 0: 0 Jan 17 12:26:25.899817 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jan 17 12:26:25.899823 kernel: Policy zone: DMA32 Jan 17 12:26:25.899829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:26:25.899835 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved) Jan 17 12:26:25.899841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:26:25.899849 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:26:25.899855 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:26:25.899861 kernel: Dynamic Preempt: voluntary Jan 17 12:26:25.899867 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:26:25.899873 kernel: rcu: RCU event tracing is enabled. Jan 17 12:26:25.899879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:26:25.899885 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:26:25.899891 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:26:25.899897 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:26:25.899905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:26:25.899911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:26:25.899917 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:26:25.899923 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:26:25.899929 kernel: Console: colour VGA+ 80x25 Jan 17 12:26:25.899935 kernel: printk: console [tty0] enabled Jan 17 12:26:25.899940 kernel: printk: console [ttyS0] enabled Jan 17 12:26:25.899946 kernel: ACPI: Core revision 20230628 Jan 17 12:26:25.899952 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:26:25.899958 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:26:25.899966 kernel: x2apic enabled Jan 17 12:26:25.899972 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:26:25.899978 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:26:25.899984 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:26:25.899989 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404) Jan 17 12:26:25.899995 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:26:25.900001 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:26:25.900007 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:26:25.900022 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:26:25.900028 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:26:25.900034 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:26:25.900042 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:26:25.900049 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:26:25.900055 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:26:25.900061 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:26:25.900067 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:26:25.900073 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jan 17 12:26:25.900082 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:26:25.900088 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:26:25.900095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:26:25.900101 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:26:25.900107 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:26:25.900113 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:26:25.900119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:26:25.900128 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:26:25.900134 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:26:25.900140 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:26:25.900146 kernel: landlock: Up and running. Jan 17 12:26:25.900152 kernel: SELinux: Initializing. Jan 17 12:26:25.900158 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:26:25.900165 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:26:25.900171 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:26:25.900190 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:26:25.900199 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:26:25.900206 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:26:25.900212 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:26:25.900218 kernel: ... version: 0 Jan 17 12:26:25.900224 kernel: ... bit width: 48 Jan 17 12:26:25.900230 kernel: ... generic registers: 6 Jan 17 12:26:25.900236 kernel: ... value mask: 0000ffffffffffff Jan 17 12:26:25.900243 kernel: ... max period: 00007fffffffffff Jan 17 12:26:25.900249 kernel: ... fixed-purpose events: 0 Jan 17 12:26:25.900257 kernel: ... event mask: 000000000000003f Jan 17 12:26:25.900263 kernel: signal: max sigframe size: 1776 Jan 17 12:26:25.900269 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:26:25.900275 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:26:25.900282 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:26:25.900288 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:26:25.900294 kernel: .... 
node #0, CPUs: #1 Jan 17 12:26:25.900307 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:26:25.900340 kernel: smpboot: Max logical packages: 1 Jan 17 12:26:25.900347 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Jan 17 12:26:25.900356 kernel: devtmpfs: initialized Jan 17 12:26:25.900362 kernel: x86/mm: Memory block size: 128MB Jan 17 12:26:25.900368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:26:25.900374 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:26:25.900381 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:26:25.900387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:26:25.900393 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:26:25.900399 kernel: audit: type=2000 audit(1737116784.300:1): state=initialized audit_enabled=0 res=1 Jan 17 12:26:25.900405 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:26:25.900417 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:26:25.900423 kernel: cpuidle: using governor menu Jan 17 12:26:25.900429 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:26:25.900436 kernel: dca service started, version 1.12.1 Jan 17 12:26:25.900442 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:26:25.900448 kernel: PCI: Using configuration type 1 for base access Jan 17 12:26:25.900454 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:26:25.900460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:26:25.900469 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:26:25.900475 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:26:25.900481 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:26:25.900488 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:26:25.900494 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:26:25.900500 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:26:25.900506 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:26:25.900512 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:26:25.900519 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:26:25.900527 kernel: ACPI: Interpreter enabled Jan 17 12:26:25.900533 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:26:25.900539 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:26:25.900554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:26:25.900560 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:26:25.900566 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:26:25.900572 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:26:25.900731 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:26:25.900851 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:26:25.900958 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:26:25.900967 kernel: PCI host bridge to bus 0000:00 Jan 17 12:26:25.901092 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:26:25.901970 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 
12:26:25.902078 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:26:25.903204 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jan 17 12:26:25.903325 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:26:25.903424 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 17 12:26:25.903519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:26:25.903664 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:26:25.903782 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 17 12:26:25.903888 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jan 17 12:26:25.903992 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jan 17 12:26:25.904101 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jan 17 12:26:25.905338 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jan 17 12:26:25.905455 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:26:25.905586 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.905693 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jan 17 12:26:25.905804 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.905913 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jan 17 12:26:25.906023 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.906126 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jan 17 12:26:25.907273 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.907386 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jan 17 12:26:25.907499 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.907622 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jan 17 12:26:25.907734 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.907836 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jan 17 12:26:25.907944 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.908049 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jan 17 12:26:25.908161 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.910405 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jan 17 12:26:25.910530 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 17 12:26:25.910650 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jan 17 12:26:25.910762 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:26:25.910864 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:26:25.910973 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:26:25.911074 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jan 17 12:26:25.911225 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jan 17 12:26:25.911342 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:26:25.911445 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 12:26:25.911574 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 17 12:26:25.911683 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jan 17 12:26:25.911790 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 17 
12:26:25.911902 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Jan 17 12:26:25.912007 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 17 12:26:25.912110 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 12:26:25.912231 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 17 12:26:25.912351 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 17 12:26:25.912459 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jan 17 12:26:25.912577 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 17 12:26:25.912687 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 12:26:25.912790 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 12:26:25.912910 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 17 12:26:25.913018 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jan 17 12:26:25.913125 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jan 17 12:26:25.913785 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 17 12:26:25.913898 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 12:26:25.914008 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:26:25.914124 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 17 12:26:25.914258 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 17 12:26:25.914364 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 17 12:26:25.914466 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 12:26:25.914583 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:26:25.914704 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 17 12:26:25.914822 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jan 17 12:26:25.914930 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jan 17 12:26:25.915036 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 17 12:26:25.915139 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 12:26:25.915320 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:26:25.915441 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 17 12:26:25.915560 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jan 17 12:26:25.915675 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jan 17 12:26:25.915778 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 17 12:26:25.915881 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 12:26:25.915982 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:26:25.915991 kernel: acpiphp: Slot [0] registered Jan 17 12:26:25.916106 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 17 12:26:25.916232 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jan 17 12:26:25.916344 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jan 17 12:26:25.916459 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jan 17 12:26:25.916609 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 17 12:26:25.916717 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 12:26:25.916820 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:26:25.916829 
kernel: acpiphp: Slot [0-2] registered Jan 17 12:26:25.916929 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 17 12:26:25.917032 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 17 12:26:25.917135 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:26:25.917148 kernel: acpiphp: Slot [0-3] registered Jan 17 12:26:25.917268 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 17 12:26:25.917374 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 12:26:25.917476 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:26:25.917485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:26:25.917491 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:26:25.917498 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:26:25.917504 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:26:25.917510 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:26:25.917521 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:26:25.917527 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:26:25.917533 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:26:25.917550 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:26:25.917557 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:26:25.917563 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:26:25.917569 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:26:25.917575 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:26:25.917582 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:26:25.917590 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:26:25.917596 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:26:25.917603 kernel: iommu: Default domain type: Translated Jan 17 12:26:25.917609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:26:25.917615 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:26:25.917621 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:26:25.917628 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:26:25.917634 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jan 17 12:26:25.917739 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:26:25.917847 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:26:25.917949 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:26:25.917958 kernel: vgaarb: loaded Jan 17 12:26:25.917965 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:26:25.917971 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:26:25.917977 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:26:25.917984 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:26:25.917990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:26:25.918000 kernel: pnp: PnP ACPI init Jan 17 12:26:25.918114 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:26:25.918124 kernel: pnp: PnP ACPI: found 5 devices Jan 17 12:26:25.918130 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:26:25.918136 kernel: NET: Registered PF_INET protocol family Jan 17 
12:26:25.918143 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:26:25.918149 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:26:25.918156 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:26:25.918162 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:26:25.918171 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:26:25.918192 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:26:25.918199 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:26:25.918205 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:26:25.918212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:26:25.918218 kernel: NET: Registered PF_XDP protocol family Jan 17 12:26:25.918326 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 12:26:25.918430 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 12:26:25.918538 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 12:26:25.918658 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 17 12:26:25.918760 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 17 12:26:25.918864 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 17 12:26:25.918966 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 17 12:26:25.919068 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 12:26:25.919171 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 17 12:26:25.919330 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 17 12:26:25.919433 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 12:26:25.919534 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 12:26:25.919652 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 17 12:26:25.919754 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 12:26:25.919856 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:26:25.919958 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 17 12:26:25.920066 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 12:26:25.920200 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:26:25.920307 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 17 12:26:25.920410 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 12:26:25.920513 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:26:25.920630 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 17 12:26:25.920733 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 12:26:25.920836 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:26:25.920938 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 17 12:26:25.921040 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 17 12:26:25.921149 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 12:26:25.921269 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:26:25.921373 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 17 12:26:25.921475 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 17 12:26:25.921590 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 17 12:26:25.921694 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:26:25.921801 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 17 12:26:25.921904 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 17 12:26:25.922007 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 12:26:25.922111 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:26:25.922232 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:26:25.922336 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:26:25.922431 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:26:25.922525 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jan 17 12:26:25.922632 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:26:25.922728 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 17 12:26:25.922839 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 17 12:26:25.922939 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 17 12:26:25.923050 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 17 12:26:25.923150 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 12:26:25.923363 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 17 12:26:25.923465 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 12:26:25.923590 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 17 12:26:25.923691 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 12:26:25.923802 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 17 12:26:25.923900 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 12:26:25.924005 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jan 17 12:26:25.924103 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 12:26:25.924308 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 17 12:26:25.924412 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 17 12:26:25.924516 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 12:26:25.924637 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 17 12:26:25.924737 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jan 17 12:26:25.924834 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 12:26:25.924938 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 17 12:26:25.925035 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 17 12:26:25.925132 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 12:26:25.925145 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:26:25.925152 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:26:25.925159 kernel: Initialise system trusted keyrings Jan 17 12:26:25.925165 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:26:25.925172 kernel: Key type asymmetric registered Jan 17 12:26:25.925192 kernel: Asymmetric key parser 'x509' registered Jan 17 12:26:25.925199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Jan 17 12:26:25.925205 kernel: io scheduler mq-deadline registered Jan 17 12:26:25.925212 kernel: io scheduler kyber registered Jan 17 12:26:25.925221 kernel: io scheduler bfq registered Jan 17 12:26:25.925331 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 12:26:25.925435 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 12:26:25.925539 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 12:26:25.925661 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 17 12:26:25.925764 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 12:26:25.925868 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 12:26:25.925970 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 12:26:25.926077 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 12:26:25.926219 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 12:26:25.926328 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 12:26:25.926430 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 12:26:25.926532 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 12:26:25.926650 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 12:26:25.926752 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 12:26:25.926853 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 12:26:25.926953 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 12:26:25.926967 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:26:25.927067 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 17 12:26:25.927169 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 17 12:26:25.927195 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:26:25.927203 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 17 12:26:25.927210 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:26:25.927217 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:26:25.927223 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:26:25.927232 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:26:25.927239 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:26:25.927246 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:26:25.927358 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 12:26:25.927457 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 12:26:25.927568 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:26:25 UTC (1737116785) Jan 17 12:26:25.927666 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:26:25.927675 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:26:25.927685 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:26:25.927692 kernel: Segment Routing with IPv6 Jan 17 12:26:25.927698 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:26:25.927705 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:26:25.927711 kernel: Key type dns_resolver registered Jan 17 12:26:25.927718 kernel: IPI shorthand broadcast: enabled Jan 17 12:26:25.927724 kernel: sched_clock: Marking stable (1125007044, 132643766)->(1265522029, -7871219) Jan 17 12:26:25.927731 kernel: registered taskstats version 1 Jan 17 12:26:25.927737 kernel: Loading compiled-in X.509 certificates Jan 17 12:26:25.927746 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:26:25.927753 kernel: Key type .fscrypt registered Jan 17 12:26:25.927759 kernel: Key type fscrypt-provisioning registered Jan 17 12:26:25.927766 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:26:25.927772 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:26:25.927779 kernel: ima: No architecture policies found Jan 17 12:26:25.927785 kernel: clk: Disabling unused clocks Jan 17 12:26:25.927792 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:26:25.927799 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:26:25.927807 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:26:25.927814 kernel: Run /init as init process Jan 17 12:26:25.927820 kernel: with arguments: Jan 17 12:26:25.927827 kernel: /init Jan 17 12:26:25.927834 kernel: with environment: Jan 17 12:26:25.927841 kernel: HOME=/ Jan 17 12:26:25.927847 kernel: TERM=linux Jan 17 12:26:25.927853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:26:25.927862 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:26:25.927873 systemd[1]: Detected virtualization kvm. Jan 17 12:26:25.927880 systemd[1]: Detected architecture x86-64. Jan 17 12:26:25.927887 systemd[1]: Running in initrd. Jan 17 12:26:25.927893 systemd[1]: No hostname configured, using default hostname. Jan 17 12:26:25.927900 systemd[1]: Hostname set to . Jan 17 12:26:25.927907 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:26:25.927914 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:26:25.927923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:26:25.927930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:26:25.927937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:26:25.927944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:26:25.927951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:26:25.927959 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:26:25.927967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:26:25.927976 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:26:25.927983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:26:25.927990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:26:25.927997 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:26:25.928004 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:26:25.928011 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:26:25.928018 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:26:25.928025 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 17 12:26:25.928034 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:26:25.928041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:26:25.928048 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:26:25.928055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:26:25.928062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:26:25.928069 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:26:25.928076 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:26:25.928083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:26:25.928090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:26:25.928099 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:26:25.928106 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:26:25.928113 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:26:25.928119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:26:25.928126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:26:25.928133 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:26:25.928160 systemd-journald[187]: Collecting audit messages is disabled. Jan 17 12:26:25.928194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:26:25.928202 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:26:25.928209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:26:25.928219 systemd-journald[187]: Journal started Jan 17 12:26:25.928235 systemd-journald[187]: Runtime Journal (/run/log/journal/3cb86386e7c1426181bc57fae3d7f064) is 4.8M, max 38.4M, 33.6M free. Jan 17 12:26:25.899352 systemd-modules-load[188]: Inserted module 'overlay' Jan 17 12:26:25.960226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:26:25.960253 kernel: Bridge firewalling registered Jan 17 12:26:25.935007 systemd-modules-load[188]: Inserted module 'br_netfilter' Jan 17 12:26:25.970326 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:26:25.970373 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:26:25.971727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:25.973644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:26:25.979307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:26:25.982308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:26:25.983476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:26:25.986502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:26:25.999095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:26:26.000392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:26:26.004841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:26:26.013298 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:26:26.013903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:26:26.017303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:26:26.023623 dracut-cmdline[219]: dracut-dracut-053 Jan 17 12:26:26.027657 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:26:26.047685 systemd-resolved[221]: Positive Trust Anchors: Jan 17 12:26:26.047699 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:26:26.047725 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:26:26.053222 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 17 12:26:26.054197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:26:26.054911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:26:26.096222 kernel: SCSI subsystem initialized Jan 17 12:26:26.104204 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:26:26.114207 kernel: iscsi: registered transport (tcp) Jan 17 12:26:26.131310 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:26:26.131354 kernel: QLogic iSCSI HBA Driver Jan 17 12:26:26.172216 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:26:26.177312 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:26:26.199346 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:26:26.199386 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:26:26.200493 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:26:26.239213 kernel: raid6: avx2x4 gen() 30777 MB/s Jan 17 12:26:26.256211 kernel: raid6: avx2x2 gen() 25848 MB/s Jan 17 12:26:26.273335 kernel: raid6: avx2x1 gen() 20761 MB/s Jan 17 12:26:26.273378 kernel: raid6: using algorithm avx2x4 gen() 30777 MB/s Jan 17 12:26:26.292210 kernel: raid6: .... xor() 4306 MB/s, rmw enabled Jan 17 12:26:26.292262 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:26:26.311217 kernel: xor: automatically using best checksumming function avx Jan 17 12:26:26.441214 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:26:26.452452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 17 12:26:26.458298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:26:26.476157 systemd-udevd[405]: Using default interface naming scheme 'v255'. Jan 17 12:26:26.481477 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:26:26.491295 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:26:26.502450 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 17 12:26:26.531307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:26:26.539344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:26:26.604652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:26:26.611332 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:26:26.627657 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:26:26.628780 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:26:26.630373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:26:26.631796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:26:26.639704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:26:26.659023 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:26:26.726250 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:26:26.733203 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:26:26.733268 kernel: libata version 3.00 loaded. Jan 17 12:26:26.739628 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 17 12:26:26.739686 kernel: ACPI: bus type USB registered Jan 17 12:26:26.739333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:26:26.739445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:26:26.740169 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:26:26.746251 kernel: usbcore: registered new interface driver usbfs Jan 17 12:26:26.746268 kernel: usbcore: registered new interface driver hub Jan 17 12:26:26.742667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:26:26.742787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:26.745474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:26:26.756324 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:26:26.756344 kernel: AES CTR mode by8 optimization enabled Jan 17 12:26:26.756359 kernel: usbcore: registered new device driver usb Jan 17 12:26:26.755619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 12:26:26.790195 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:26:26.812941 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:26:26.812956 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:26:26.813096 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:26:26.815335 kernel: scsi host1: ahci Jan 17 12:26:26.815552 kernel: scsi host2: ahci Jan 17 12:26:26.815754 kernel: scsi host3: ahci Jan 17 12:26:26.815945 kernel: scsi host4: ahci Jan 17 12:26:26.816081 kernel: scsi host5: ahci Jan 17 12:26:26.816230 kernel: scsi host6: ahci Jan 17 12:26:26.816359 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jan 17 12:26:26.816375 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jan 17 12:26:26.816384 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jan 17 12:26:26.816392 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jan 17 12:26:26.816400 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jan 17 12:26:26.816408 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jan 17 12:26:26.816417 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 17 12:26:26.819684 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 17 12:26:26.819831 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:26:26.819967 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 17 12:26:26.820098 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:26:26.820247 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:26:26.820257 kernel: GPT:17805311 != 80003071 Jan 17 12:26:26.820265 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:26:26.820273 kernel: GPT:17805311 != 80003071 Jan 17 12:26:26.820281 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:26:26.820289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:26:26.820297 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:26:26.856949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:26.862307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:26:26.874191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:26:27.132404 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:26:27.132483 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:26:27.132497 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 12:26:27.132508 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:26:27.132519 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:26:27.132529 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:26:27.133917 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:26:27.136257 kernel: ata1.00: applying bridge limits Jan 17 12:26:27.136285 kernel: ata1.00: configured for UDMA/100 Jan 17 12:26:27.140210 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:26:27.163226 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 12:26:27.195714 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 17 12:26:27.195864 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 12:26:27.195991 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 12:26:27.196113 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 17 12:26:27.196300 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 17 12:26:27.196483 kernel: hub 1-0:1.0: USB hub found Jan 17 12:26:27.196656 kernel: hub 1-0:1.0: 4 ports detected Jan 17 12:26:27.196785 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 17 12:26:27.196942 kernel: hub 2-0:1.0: USB hub found Jan 17 12:26:27.197152 kernel: hub 2-0:1.0: 4 ports detected Jan 17 12:26:27.197804 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:26:27.209941 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:26:27.209961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466) Jan 17 12:26:27.209971 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:26:27.213965 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 17 12:26:27.223214 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (467) Jan 17 12:26:27.228009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 12:26:27.234084 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 17 12:26:27.239042 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 17 12:26:27.240373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 17 12:26:27.246386 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:26:27.255206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:26:27.255435 disk-uuid[577]: Primary Header is updated. Jan 17 12:26:27.255435 disk-uuid[577]: Secondary Entries is updated. Jan 17 12:26:27.255435 disk-uuid[577]: Secondary Header is updated. 
Jan 17 12:26:27.426228 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 12:26:27.564222 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:26:27.569545 kernel: usbcore: registered new interface driver usbhid Jan 17 12:26:27.569577 kernel: usbhid: USB HID core driver Jan 17 12:26:27.575731 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 12:26:27.575761 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 17 12:26:28.271216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:26:28.271779 disk-uuid[578]: The operation has completed successfully. Jan 17 12:26:28.330594 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:26:28.330732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:26:28.347332 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:26:28.351844 sh[598]: Success Jan 17 12:26:28.364367 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:26:28.407880 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:26:28.419268 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:26:28.421587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:26:28.439811 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:26:28.439848 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:26:28.439859 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:26:28.442862 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:26:28.442890 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:26:28.452203 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:26:28.453325 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:26:28.454462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:26:28.460301 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:26:28.464300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:26:28.476467 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:26:28.476506 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:26:28.476518 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:26:28.481818 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:26:28.481844 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:26:28.491101 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:26:28.493214 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:26:28.498190 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:26:28.502387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:26:28.556812 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 12:26:28.565352 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:26:28.586729 ignition[704]: Ignition 2.19.0 Jan 17 12:26:28.586741 ignition[704]: Stage: fetch-offline Jan 17 12:26:28.586782 ignition[704]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:28.586795 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:28.586878 ignition[704]: parsed url from cmdline: "" Jan 17 12:26:28.586882 ignition[704]: no config URL provided Jan 17 12:26:28.586886 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:26:28.586894 ignition[704]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:26:28.590938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:26:28.586899 ignition[704]: failed to fetch config: resource requires networking Jan 17 12:26:28.587071 ignition[704]: Ignition finished successfully Jan 17 12:26:28.593798 systemd-networkd[780]: lo: Link UP Jan 17 12:26:28.593802 systemd-networkd[780]: lo: Gained carrier Jan 17 12:26:28.596587 systemd-networkd[780]: Enumeration completed Jan 17 12:26:28.596671 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:26:28.597649 systemd[1]: Reached target network.target - Network. Jan 17 12:26:28.597742 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:28.597746 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:26:28.598822 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:28.598825 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:26:28.599381 systemd-networkd[780]: eth0: Link UP Jan 17 12:26:28.599387 systemd-networkd[780]: eth0: Gained carrier Jan 17 12:26:28.599393 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:28.604463 systemd-networkd[780]: eth1: Link UP Jan 17 12:26:28.604469 systemd-networkd[780]: eth1: Gained carrier Jan 17 12:26:28.604478 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:28.605328 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:26:28.618152 ignition[787]: Ignition 2.19.0 Jan 17 12:26:28.618162 ignition[787]: Stage: fetch Jan 17 12:26:28.618338 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:28.618348 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:28.618435 ignition[787]: parsed url from cmdline: "" Jan 17 12:26:28.618438 ignition[787]: no config URL provided Jan 17 12:26:28.618443 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:26:28.618454 ignition[787]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:26:28.618470 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 17 12:26:28.618647 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 17 12:26:28.655230 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:26:28.671218 systemd-networkd[780]: eth0: DHCPv4 address 49.12.221.202/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 12:26:28.819291 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 17 12:26:28.825328 ignition[787]: GET result: OK Jan 17 12:26:28.825540 ignition[787]: parsing config with SHA512: 744115262c0189c3b7b8afa02fc9ec259ae9d87742f45d7b84be8e3506294da2dc09533051e557109355ef9930dee02a8d447659708064c92cb8b6ebba16b31b Jan 17 12:26:28.838659 unknown[787]: fetched base config from "system" Jan 17 12:26:28.838687 unknown[787]: fetched base config from "system" Jan 17 12:26:28.839473 ignition[787]: fetch: fetch complete Jan 17 12:26:28.838706 unknown[787]: fetched user config from "hetzner" Jan 17 12:26:28.839484 ignition[787]: fetch: fetch passed Jan 17 12:26:28.839589 ignition[787]: Ignition finished successfully Jan 17 12:26:28.845770 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:26:28.854458 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:26:28.884280 ignition[795]: Ignition 2.19.0 Jan 17 12:26:28.884297 ignition[795]: Stage: kargs Jan 17 12:26:28.884508 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:28.884541 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:28.887929 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:26:28.885637 ignition[795]: kargs: kargs passed Jan 17 12:26:28.885702 ignition[795]: Ignition finished successfully Jan 17 12:26:28.898467 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:26:28.916672 ignition[801]: Ignition 2.19.0 Jan 17 12:26:28.916687 ignition[801]: Stage: disks Jan 17 12:26:28.916862 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:28.916875 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:28.917790 ignition[801]: disks: disks passed Jan 17 12:26:28.920726 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:26:28.917850 ignition[801]: Ignition finished successfully Jan 17 12:26:28.922494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:26:28.923711 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:26:28.924845 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:26:28.925951 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 17 12:26:28.927408 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:26:28.933280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:26:28.947807 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:26:28.949861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:26:28.955260 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:26:29.035456 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:26:29.035894 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:26:29.036840 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:26:29.043255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:26:29.046269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:26:29.048373 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:26:29.051779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:26:29.052977 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:26:29.055825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:26:29.057380 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818) Jan 17 12:26:29.059081 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:26:29.059110 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:26:29.059134 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:26:29.065483 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:26:29.065506 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:26:29.067907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:26:29.074965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:26:29.112548 coreos-metadata[820]: Jan 17 12:26:29.112 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 12:26:29.114122 coreos-metadata[820]: Jan 17 12:26:29.114 INFO Fetch successful Jan 17 12:26:29.115717 coreos-metadata[820]: Jan 17 12:26:29.114 INFO wrote hostname ci-4081-3-0-6-80d8e78ae3 to /sysroot/etc/hostname Jan 17 12:26:29.117199 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:26:29.118113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:26:29.122698 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:26:29.126327 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:26:29.130542 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:26:29.214124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:26:29.221300 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:26:29.225023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 17 12:26:29.229209 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:26:29.251468 ignition[935]: INFO : Ignition 2.19.0 Jan 17 12:26:29.251468 ignition[935]: INFO : Stage: mount Jan 17 12:26:29.251468 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:29.251468 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:29.253940 ignition[935]: INFO : mount: mount passed Jan 17 12:26:29.253940 ignition[935]: INFO : Ignition finished successfully Jan 17 12:26:29.255485 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:26:29.261283 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:26:29.261964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:26:29.439002 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:26:29.445625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:26:29.477247 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (946) Jan 17 12:26:29.483108 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:26:29.483166 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:26:29.487500 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:26:29.496649 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:26:29.496713 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:26:29.501510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:26:29.545408 ignition[963]: INFO : Ignition 2.19.0 Jan 17 12:26:29.546833 ignition[963]: INFO : Stage: files Jan 17 12:26:29.546833 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:29.546833 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:29.550712 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:26:29.550712 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:26:29.550712 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:26:29.555477 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:26:29.557086 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:26:29.557086 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:26:29.556542 unknown[963]: wrote ssh authorized keys file for user: core Jan 17 12:26:29.561566 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:26:29.561566 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:26:29.561566 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:26:29.561566 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:26:29.635761 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:26:29.646369 systemd-networkd[780]: eth1: Gained IPv6LL Jan 17 12:26:30.010126 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:26:30.012442 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:26:30.033781 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:26:30.304494 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:26:30.478326 systemd-networkd[780]: eth0: Gained IPv6LL Jan 17 12:26:30.590056 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:26:30.590056 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(e): [started] 
processing unit "prepare-helm.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 17 12:26:30.592636 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:26:30.603708 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:26:30.603708 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:26:30.603708 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:26:30.603708 ignition[963]: INFO : files: files passed Jan 17 12:26:30.603708 ignition[963]: INFO : Ignition finished successfully Jan 17 12:26:30.596218 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:26:30.606156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:26:30.609854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:26:30.611456 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:26:30.611865 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:26:30.621989 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:26:30.621989 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:26:30.624189 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:26:30.623773 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:26:30.625113 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:26:30.629294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:26:30.650780 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:26:30.650912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:26:30.652014 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:26:30.652954 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 17 12:26:30.654028 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:26:30.658289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:26:30.669416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:26:30.674323 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:26:30.683119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:26:30.684321 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:26:30.684914 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:26:30.685980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:26:30.686074 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:26:30.687226 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:26:30.687882 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:26:30.688876 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:26:30.689796 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:26:30.690711 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:26:30.691733 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:26:30.692757 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:26:30.693883 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:26:30.694874 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:26:30.696005 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:26:30.696995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:26:30.697115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:26:30.698484 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:26:30.699445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:26:30.700461 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:26:30.700572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:26:30.701589 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:26:30.701687 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:26:30.702934 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:26:30.703035 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:26:30.703718 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:26:30.703855 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:26:30.704781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:26:30.704910 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:26:30.717107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:26:30.717644 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:26:30.717798 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 17 12:26:30.721357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:26:30.727706 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:26:30.727832 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:26:30.729992 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:26:30.730733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:26:30.735393 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:26:30.736092 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:26:30.739542 ignition[1015]: INFO : Ignition 2.19.0 Jan 17 12:26:30.739542 ignition[1015]: INFO : Stage: umount Jan 17 12:26:30.739542 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:26:30.739542 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:26:30.739542 ignition[1015]: INFO : umount: umount passed Jan 17 12:26:30.739542 ignition[1015]: INFO : Ignition finished successfully Jan 17 12:26:30.739138 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:26:30.739251 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:26:30.744425 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:26:30.744523 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:26:30.746050 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:26:30.746096 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:26:30.747315 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:26:30.747360 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:26:30.749884 systemd[1]: Stopped target network.target - Network. Jan 17 12:26:30.751549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:26:30.751602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:26:30.756646 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:26:30.758792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:26:30.762226 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:26:30.763435 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:26:30.764768 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:26:30.770924 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:26:30.770998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:26:30.771579 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:26:30.771631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:26:30.774913 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:26:30.774971 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:26:30.775909 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:26:30.775955 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:26:30.777331 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:26:30.778350 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:26:30.780301 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 17 12:26:30.780881 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:26:30.780983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:26:30.781230 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 17 12:26:30.782117 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:26:30.782230 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:26:30.786334 systemd-networkd[780]: eth1: DHCPv6 lease lost Jan 17 12:26:30.786399 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:26:30.786535 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:26:30.791258 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:26:30.791447 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:26:30.793629 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:26:30.793721 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:26:30.801334 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:26:30.801943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:26:30.802010 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:26:30.802606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:26:30.802652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:26:30.803173 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:26:30.803236 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:26:30.804117 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:26:30.804161 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:26:30.805280 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:26:30.815614 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:26:30.815734 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:26:30.820787 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:26:30.820973 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:26:30.822020 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:26:30.822083 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:26:30.823163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:26:30.823224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:26:30.824304 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:26:30.824353 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:26:30.825899 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:26:30.825944 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:26:30.826952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:26:30.826996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:26:30.833370 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:26:30.834296 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 17 12:26:30.834350 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:26:30.834886 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:26:30.834932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:30.841570 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:26:30.841683 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:26:30.843650 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:26:30.848311 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:26:30.857737 systemd[1]: Switching root. Jan 17 12:26:30.891345 systemd-journald[187]: Journal stopped Jan 17 12:26:31.921974 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 17 12:26:31.922034 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:26:31.922047 kernel: SELinux: policy capability open_perms=1 Jan 17 12:26:31.922056 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:26:31.922073 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:26:31.922086 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:26:31.922099 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:26:31.922109 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:26:31.922118 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:26:31.922131 kernel: audit: type=1403 audit(1737116791.070:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:26:31.922142 systemd[1]: Successfully loaded SELinux policy in 46.540ms. Jan 17 12:26:31.922157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.416ms. Jan 17 12:26:31.922172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:26:31.922196 systemd[1]: Detected virtualization kvm. Jan 17 12:26:31.922206 systemd[1]: Detected architecture x86-64. Jan 17 12:26:31.922216 systemd[1]: Detected first boot. Jan 17 12:26:31.922226 systemd[1]: Hostname set to <ci-4081-3-0-6-80d8e78ae3>. Jan 17 12:26:31.922236 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:26:31.922246 zram_generator::config[1075]: No configuration found. Jan 17 12:26:31.922258 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:26:31.922271 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:26:31.922280 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:26:31.922291 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:26:31.922304 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:26:31.922314 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:26:31.922324 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:26:31.922334 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:26:31.922344 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 17 12:26:31.922362 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:26:31.922372 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:26:31.922382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:26:31.922392 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:26:31.922402 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:26:31.922412 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:26:31.922427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:26:31.922437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:26:31.922447 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:26:31.922459 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:26:31.922469 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:26:31.922479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:26:31.922489 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:26:31.922510 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:26:31.922520 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:26:31.922530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:26:31.922543 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:26:31.922553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:26:31.922564 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:26:31.922574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:26:31.922584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:26:31.922594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:26:31.922606 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:26:31.922619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:26:31.922631 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:26:31.922641 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:26:31.922652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:31.922662 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:26:31.922671 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:26:31.922681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:26:31.922691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:26:31.922704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:26:31.922714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 17 12:26:31.922724 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:26:31.922734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:26:31.922744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:26:31.922754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:26:31.922764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:26:31.922774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:26:31.922787 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:26:31.922798 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:26:31.922809 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:26:31.922819 kernel: fuse: init (API version 7.39) Jan 17 12:26:31.922828 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:26:31.922839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:26:31.922848 kernel: ACPI: bus type drm_connector registered Jan 17 12:26:31.922858 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:26:31.922868 kernel: loop: module loaded Jan 17 12:26:31.922881 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:26:31.922891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:26:31.922902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:31.922912 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:26:31.922923 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:26:31.922933 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:26:31.922942 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:26:31.922952 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:26:31.922962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:26:31.922992 systemd-journald[1169]: Collecting audit messages is disabled. Jan 17 12:26:31.923016 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:26:31.923028 systemd-journald[1169]: Journal started Jan 17 12:26:31.923047 systemd-journald[1169]: Runtime Journal (/run/log/journal/3cb86386e7c1426181bc57fae3d7f064) is 4.8M, max 38.4M, 33.6M free. Jan 17 12:26:31.926535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:26:31.926568 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:26:31.929161 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:26:31.929470 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:26:31.930264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:26:31.930454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:26:31.931235 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 17 12:26:31.931422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:26:31.932276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:26:31.932486 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:26:31.933808 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:26:31.933997 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:26:31.934764 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:26:31.934960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:26:31.936726 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:26:31.937566 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:26:31.938523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:26:31.955025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:26:31.961340 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:26:31.964739 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:26:31.965354 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:26:31.972519 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:26:31.976390 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:26:31.977013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:26:31.980323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:26:31.981892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:26:31.997680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:26:32.017007 systemd-journald[1169]: Time spent on flushing to /var/log/journal/3cb86386e7c1426181bc57fae3d7f064 is 16.819ms for 1119 entries. Jan 17 12:26:32.017007 systemd-journald[1169]: System Journal (/var/log/journal/3cb86386e7c1426181bc57fae3d7f064) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:26:32.052442 systemd-journald[1169]: Received client request to flush runtime journal. Jan 17 12:26:32.008363 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:26:32.012469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:26:32.014452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:26:32.026691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:26:32.031282 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:26:32.059725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:26:32.064682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:26:32.078896 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 17 12:26:32.079167 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. 
Jan 17 12:26:32.087080 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:26:32.094411 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:26:32.103566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:26:32.115382 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:26:32.127272 udevadm[1237]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:26:32.143853 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:26:32.150433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:26:32.167820 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 17 12:26:32.168113 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 17 12:26:32.172953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:26:32.539129 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:26:32.544366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:26:32.574406 systemd-udevd[1246]: Using default interface naming scheme 'v255'. Jan 17 12:26:32.594871 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:26:32.604319 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:26:32.626371 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:26:32.652695 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:26:32.674124 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:26:32.731230 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:26:32.753228 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1260) Jan 17 12:26:32.771568 systemd-networkd[1250]: lo: Link UP Jan 17 12:26:32.771577 systemd-networkd[1250]: lo: Gained carrier Jan 17 12:26:32.776236 systemd-networkd[1250]: Enumeration completed Jan 17 12:26:32.776402 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:26:32.778805 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.779193 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:26:32.784326 systemd-networkd[1250]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.784333 systemd-networkd[1250]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:26:32.784848 systemd-networkd[1250]: eth0: Link UP Jan 17 12:26:32.784852 systemd-networkd[1250]: eth0: Gained carrier Jan 17 12:26:32.784863 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.787445 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 12:26:32.788446 systemd-networkd[1250]: eth1: Link UP Jan 17 12:26:32.789407 systemd-networkd[1250]: eth1: Gained carrier Jan 17 12:26:32.789684 systemd-networkd[1250]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.795655 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:26:32.800215 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:26:32.826260 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.832029 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 17 12:26:32.832047 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Jan 17 12:26:32.837734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:32.837860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:26:32.844370 systemd-networkd[1250]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:26:32.845302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:26:32.847347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:26:32.849303 systemd-networkd[1250]: eth0: DHCPv4 address 49.12.221.202/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 12:26:32.858289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:26:32.858787 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:26:32.858827 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:26:32.858859 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:32.860344 systemd-networkd[1250]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:26:32.862371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:26:32.862575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:26:32.867702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:26:32.867891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:26:32.871316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:26:32.876165 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:26:32.876780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:26:32.879653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:26:32.888202 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 17 12:26:32.890198 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:26:32.892357 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:26:32.892549 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:26:32.895217 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 17 12:26:32.901337 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 12:26:32.901405 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:26:32.903362 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:26:32.903407 kernel: [drm] features: -context_init Jan 17 12:26:32.908207 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:26:32.916198 kernel: [drm] number of scanouts: 1 Jan 17 12:26:32.916228 kernel: [drm] number of cap sets: 0 Jan 17 12:26:32.922609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 12:26:32.928208 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 17 12:26:32.935500 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:26:32.935535 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 12:26:32.944205 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:26:32.953512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:26:32.964830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:26:32.965100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:32.970125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:26:32.974950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:26:32.975340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:32.981324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:26:33.031154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:26:33.120688 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:26:33.125439 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:26:33.145956 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:26:33.178236 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:26:33.178558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:26:33.185293 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:26:33.191628 lvm[1317]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:26:33.221333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:26:33.225444 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:26:33.225689 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:26:33.225719 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:26:33.225924 systemd[1]: Reached target machines.target - Containers. 
Jan 17 12:26:33.227543 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:26:33.238455 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:26:33.240382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:26:33.245383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:26:33.247249 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:26:33.249356 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:26:33.261390 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:26:33.266773 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:26:33.279416 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:26:33.292220 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:26:33.302120 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:26:33.303118 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:26:33.328211 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:26:33.346072 kernel: loop1: detected capacity change from 0 to 8 Jan 17 12:26:33.362217 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:26:33.397352 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:26:33.437196 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:26:33.456770 kernel: loop5: detected capacity change from 0 to 8 Jan 17 12:26:33.459608 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:26:33.475213 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:26:33.491465 (sd-merge)[1338]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 17 12:26:33.492036 (sd-merge)[1338]: Merged extensions into '/usr'. Jan 17 12:26:33.497575 systemd[1]: Reloading requested from client PID 1325 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:26:33.497678 systemd[1]: Reloading... Jan 17 12:26:33.570888 zram_generator::config[1366]: No configuration found. Jan 17 12:26:33.649344 ldconfig[1321]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:26:33.708659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:26:33.763315 systemd[1]: Reloading finished in 265 ms. Jan 17 12:26:33.779626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:26:33.784622 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:26:33.794397 systemd[1]: Starting ensure-sysext.service... Jan 17 12:26:33.798336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:26:33.807305 systemd[1]: Reloading requested from client PID 1416 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:26:33.807324 systemd[1]: Reloading... 
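Note on the sd-merge lines above: systemd-sysext has overlaid the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr, and systemd then reloads its unit files to pick up the new content. The same merge can be inspected or redone by hand with the standard systemd-sysext subcommands, shown here only as an illustration:

    systemd-sysext status     # show which extension images are merged into /usr and /opt
    systemd-sysext refresh    # unmerge and re-merge all installed extensions
    systemd-sysext unmerge    # drop the overlay and expose the plain /usr again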
Jan 17 12:26:33.824586 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:26:33.824893 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:26:33.825793 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:26:33.826033 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Jan 17 12:26:33.826107 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Jan 17 12:26:33.829632 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:26:33.829646 systemd-tmpfiles[1417]: Skipping /boot Jan 17 12:26:33.842891 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:26:33.842903 systemd-tmpfiles[1417]: Skipping /boot Jan 17 12:26:33.887413 zram_generator::config[1447]: No configuration found. Jan 17 12:26:33.995672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:26:34.053463 systemd[1]: Reloading finished in 245 ms. Jan 17 12:26:34.073138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:26:34.091498 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:26:34.101318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:26:34.109339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:26:34.113662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:26:34.119310 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:26:34.131301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:34.131453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:26:34.140729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:26:34.146753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:26:34.150419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:26:34.155793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:26:34.156067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:34.160816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:26:34.161021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:26:34.168100 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:26:34.168404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:26:34.171925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:26:34.172203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:26:34.184535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
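Note on the systemd-tmpfiles warnings above: they are harmless; two tmpfiles.d drop-ins declare the same path and the first definition wins. tmpfiles.d lines use the "Type Path Mode User Group Age Argument" layout, so a duplicate of the /var/log/journal entry would look roughly like this (values are illustrative, not read from this host):

    d /var/log/journal 2755 root systemd-journal - -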
Jan 17 12:26:34.193793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:34.194105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:26:34.202447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:26:34.215865 augenrules[1531]: No rules Jan 17 12:26:34.216386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:26:34.222585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:26:34.239409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:26:34.240107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:26:34.240523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:26:34.243284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:26:34.248928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:26:34.250092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:26:34.253020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:26:34.253995 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:26:34.255234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:26:34.262703 systemd[1]: Finished ensure-sysext.service. Jan 17 12:26:34.268860 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:26:34.269055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:26:34.272907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:26:34.273239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:26:34.286727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:26:34.287539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:26:34.297369 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:26:34.303462 systemd-resolved[1506]: Positive Trust Anchors: Jan 17 12:26:34.303502 systemd-resolved[1506]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:26:34.303545 systemd-resolved[1506]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:26:34.307286 systemd-resolved[1506]: Using system hostname 'ci-4081-3-0-6-80d8e78ae3'. Jan 17 12:26:34.308648 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 17 12:26:34.309741 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:26:34.310578 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:26:34.312597 systemd[1]: Reached target network.target - Network. Jan 17 12:26:34.317354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:26:34.317769 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:26:34.318366 systemd-networkd[1250]: eth1: Gained IPv6LL Jan 17 12:26:34.324456 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:26:34.328973 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:26:34.335326 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:26:34.376576 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:26:34.378843 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:26:34.379693 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:26:34.380291 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:26:34.380821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:26:34.381655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:26:34.381693 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:26:34.382262 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:26:34.383219 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:26:34.383801 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:26:34.384271 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:26:34.391320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:26:34.395112 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:26:34.397571 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:26:34.404430 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:26:34.404953 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:26:34.405400 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:26:34.406015 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:26:34.406064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:26:34.406097 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:26:34.408832 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:26:34.413308 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:26:34.417374 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:26:34.430334 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:26:34.434408 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 12:26:34.435998 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:26:34.442267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:26:34.446789 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:26:34.447409 jq[1568]: false Jan 17 12:26:34.460242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:26:34.467276 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:26:34.486906 extend-filesystems[1569]: Found loop4 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found loop5 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found loop6 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found loop7 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda1 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda2 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda3 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found usr Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda4 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda6 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda7 Jan 17 12:26:34.486906 extend-filesystems[1569]: Found sda9 Jan 17 12:26:34.486906 extend-filesystems[1569]: Checking size of /dev/sda9 Jan 17 12:26:34.486331 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 12:26:34.552945 coreos-metadata[1565]: Jan 17 12:26:34.496 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 12:26:34.552945 coreos-metadata[1565]: Jan 17 12:26:34.497 INFO Fetch successful Jan 17 12:26:34.552945 coreos-metadata[1565]: Jan 17 12:26:34.499 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 12:26:34.552945 coreos-metadata[1565]: Jan 17 12:26:34.499 INFO Fetch successful Jan 17 12:26:34.504274 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:26:34.521296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:26:34.531783 systemd-timesyncd[1553]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Jan 17 12:26:34.531835 systemd-timesyncd[1553]: Initial clock synchronization to Fri 2025-01-17 12:26:34.349351 UTC. Jan 17 12:26:34.543380 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:26:34.552206 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:26:34.562549 extend-filesystems[1569]: Resized partition /dev/sda9 Jan 17 12:26:34.566290 extend-filesystems[1603]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:26:34.564033 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:26:34.572906 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 17 12:26:34.578023 dbus-daemon[1567]: [system] SELinux support is enabled Jan 17 12:26:34.578254 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:26:34.584645 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:26:34.591644 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
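Note on the coreos-metadata fetches above: the agent reads instance data from Hetzner's link-local metadata service, and the same endpoints it logs can be queried by hand for debugging, e.g.:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks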
Jan 17 12:26:34.591936 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:26:34.606257 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:26:34.606554 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:26:34.611524 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:26:34.611798 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:26:34.624225 jq[1607]: true Jan 17 12:26:34.632626 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:26:34.665882 update_engine[1602]: I20250117 12:26:34.665454 1602 main.cc:92] Flatcar Update Engine starting Jan 17 12:26:34.668546 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:26:34.670135 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:26:34.674369 jq[1618]: true Jan 17 12:26:34.670162 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:26:34.672814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:26:34.672837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:26:34.706275 systemd-networkd[1250]: eth0: Gained IPv6LL Jan 17 12:26:34.715846 update_engine[1602]: I20250117 12:26:34.715590 1602 update_check_scheduler.cc:74] Next update check in 6m5s Jan 17 12:26:34.713145 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:26:34.714236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:26:34.724280 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:26:34.730924 tar[1614]: linux-amd64/helm Jan 17 12:26:34.754351 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1254) Jan 17 12:26:34.754385 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:26:34.766627 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 17 12:26:34.783381 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:26:34.784271 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:26:34.794224 extend-filesystems[1603]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 12:26:34.794224 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 17 12:26:34.794224 extend-filesystems[1603]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 17 12:26:34.805809 extend-filesystems[1569]: Resized filesystem in /dev/sda9 Jan 17 12:26:34.805809 extend-filesystems[1569]: Found sr0 Jan 17 12:26:34.795984 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:26:34.796420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
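Note on the extend-filesystems/resize2fs lines above: the root filesystem on /dev/sda9 is grown online from 1617920 to 9393147 4k blocks so that it fills the provisioned disk. Flatcar's extend-filesystems.service does this automatically; an equivalent manual sequence, assuming the cloud-utils growpart tool is available, would be roughly:

    growpart /dev/sda 9      # grow partition 9 to the end of the disk
    resize2fs /dev/sda9      # on-line resize of the mounted ext4 filesystem, matching the log output above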
Jan 17 12:26:34.826209 bash[1662]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:26:34.824994 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:26:34.848636 systemd[1]: Starting sshkeys.service... Jan 17 12:26:34.851106 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:26:34.888561 systemd-logind[1596]: New seat seat0. Jan 17 12:26:34.896328 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:26:34.913033 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:26:34.913067 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:26:34.913933 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:26:34.929453 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:26:34.941166 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:26:34.962715 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:26:34.962994 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:26:34.978486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:26:34.996207 coreos-metadata[1682]: Jan 17 12:26:34.994 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 12:26:34.997394 locksmithd[1640]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:26:35.004033 coreos-metadata[1682]: Jan 17 12:26:35.001 INFO Fetch successful Jan 17 12:26:35.009582 unknown[1682]: wrote ssh authorized keys file for user: core Jan 17 12:26:35.015432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:26:35.029519 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:26:35.042637 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:26:35.044961 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:26:35.072496 update-ssh-keys[1700]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:26:35.073214 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:26:35.080328 systemd[1]: Finished sshkeys.service. Jan 17 12:26:35.081878 containerd[1621]: time="2025-01-17T12:26:35.081805730Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:26:35.103643 containerd[1621]: time="2025-01-17T12:26:35.103446239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.105233 containerd[1621]: time="2025-01-17T12:26:35.105190778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105308880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105330938Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105484276Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105503338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105566555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:26:35.105654 containerd[1621]: time="2025-01-17T12:26:35.105584520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.105964 containerd[1621]: time="2025-01-17T12:26:35.105946512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106052 containerd[1621]: time="2025-01-17T12:26:35.106038767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106110 containerd[1621]: time="2025-01-17T12:26:35.106096492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106166 containerd[1621]: time="2025-01-17T12:26:35.106144034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106643 containerd[1621]: time="2025-01-17T12:26:35.106299036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106643 containerd[1621]: time="2025-01-17T12:26:35.106541086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106872 containerd[1621]: time="2025-01-17T12:26:35.106854937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:26:35.106919 containerd[1621]: time="2025-01-17T12:26:35.106908579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:26:35.107076 containerd[1621]: time="2025-01-17T12:26:35.107058647Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:26:35.107217 containerd[1621]: time="2025-01-17T12:26:35.107201392Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:26:35.113963 containerd[1621]: time="2025-01-17T12:26:35.113928170Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:26:35.114033 containerd[1621]: time="2025-01-17T12:26:35.113991876Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:26:35.114033 containerd[1621]: time="2025-01-17T12:26:35.114010821Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 17 12:26:35.114033 containerd[1621]: time="2025-01-17T12:26:35.114024048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:26:35.114124 containerd[1621]: time="2025-01-17T12:26:35.114035933Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:26:35.114309 containerd[1621]: time="2025-01-17T12:26:35.114214991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:26:35.114535 containerd[1621]: time="2025-01-17T12:26:35.114492275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114597591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114617153Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114628598Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114660867Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114673262Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114684491Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114696837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114717 containerd[1621]: time="2025-01-17T12:26:35.114710064Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114722077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114741874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114753886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114770619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114782582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114805247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114818228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114828900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114839905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114849715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.114861 containerd[1621]: time="2025-01-17T12:26:35.114860327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114876766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114889542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114899392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114913128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114924122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114937173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114954198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114963401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.114972733Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.115006450Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.115022194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.115031279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:26:35.115051 containerd[1621]: time="2025-01-17T12:26:35.115041618Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:26:35.115273 containerd[1621]: time="2025-01-17T12:26:35.115050732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.115273 containerd[1621]: time="2025-01-17T12:26:35.115065476Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 17 12:26:35.115273 containerd[1621]: time="2025-01-17T12:26:35.115080280Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:26:35.115273 containerd[1621]: time="2025-01-17T12:26:35.115090001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:26:35.116894 containerd[1621]: time="2025-01-17T12:26:35.115517844Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:26:35.116894 containerd[1621]: time="2025-01-17T12:26:35.115580601Z" level=info msg="Connect containerd service" Jan 17 12:26:35.116894 containerd[1621]: time="2025-01-17T12:26:35.115625382Z" level=info msg="using legacy CRI server" Jan 17 12:26:35.116894 containerd[1621]: time="2025-01-17T12:26:35.115636582Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:26:35.116894 containerd[1621]: time="2025-01-17T12:26:35.115783029Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:26:35.116894 
containerd[1621]: time="2025-01-17T12:26:35.116571461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117497315Z" level=info msg="Start subscribing containerd event" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117544671Z" level=info msg="Start recovering state" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117611569Z" level=info msg="Start event monitor" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117626911Z" level=info msg="Start snapshots syncer" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117635800Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:26:35.117891 containerd[1621]: time="2025-01-17T12:26:35.117645444Z" level=info msg="Start streaming server" Jan 17 12:26:35.118391 containerd[1621]: time="2025-01-17T12:26:35.118373746Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:26:35.118485 containerd[1621]: time="2025-01-17T12:26:35.118471895Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:26:35.120320 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:26:35.121759 containerd[1621]: time="2025-01-17T12:26:35.121740110Z" level=info msg="containerd successfully booted in 0.041627s" Jan 17 12:26:35.395418 tar[1614]: linux-amd64/LICENSE Jan 17 12:26:35.395418 tar[1614]: linux-amd64/README.md Jan 17 12:26:35.413047 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:26:35.700758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:26:35.707320 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:26:35.708971 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:26:35.716034 systemd[1]: Startup finished in 6.617s (kernel) + 4.691s (userspace) = 11.308s. Jan 17 12:26:36.358781 kubelet[1725]: E0117 12:26:36.358686 1725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:26:36.362953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:26:36.363291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:26:46.613642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:26:46.619596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:26:46.752365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
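Note on containerd's "failed to load cni during init" error above: it is expected at this point, because the CRI plugin looks for a network configuration in /etc/cni/net.d (see NetworkPluginConfDir in the config dump) and nothing has installed one yet; a pod-network add-on normally drops a file there later. Purely as an illustration of the expected format, a minimal bridge-based conflist could look like:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The file name (for example /etc/cni/net.d/10-example.conflist), the network name and the subnet above are all placeholders.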
Jan 17 12:26:46.754288 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:26:46.793782 kubelet[1749]: E0117 12:26:46.793714 1749 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:26:46.800352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:26:46.800570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:26:56.856494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:26:56.865346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:26:56.996343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:26:57.000990 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:26:57.048386 kubelet[1771]: E0117 12:26:57.048309 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:26:57.054018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:26:57.054280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:07.106358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:27:07.119364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:07.256798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:07.270628 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:07.315766 kubelet[1793]: E0117 12:27:07.315693 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:07.320283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:07.320523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:17.356333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:27:17.364340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:17.497587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
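Note on the repeating kubelet failures above and below: this is the normal state of a node that has not yet joined a cluster. kubelet.service is enabled, but /var/lib/kubelet/config.yaml does not exist, so every start exits with status 1 and systemd schedules another restart roughly every ten seconds. The file is ordinarily written by kubeadm during "kubeadm init" or "kubeadm join"; a hand-written minimal KubeletConfiguration, given only as an illustrative sketch, would be:

    # /var/lib/kubelet/config.yaml -- normally generated by kubeadm, shown here only as an example
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests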
Jan 17 12:27:17.501814 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:17.546985 kubelet[1814]: E0117 12:27:17.546914 1814 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:17.551557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:17.551792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:20.438434 update_engine[1602]: I20250117 12:27:20.438314 1602 update_attempter.cc:509] Updating boot flags... Jan 17 12:27:20.492214 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1833) Jan 17 12:27:20.536290 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1834) Jan 17 12:27:20.585339 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1834) Jan 17 12:27:27.606397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:27:27.612634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:27.769400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:27.771616 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:27.815265 kubelet[1857]: E0117 12:27:27.815201 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:27.819629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:27.819997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:37.856993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 12:27:37.865399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:38.035358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:38.050751 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:38.104262 kubelet[1878]: E0117 12:27:38.104168 1878 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:38.107986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:38.108211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:48.356341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 12:27:48.361343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:27:48.507304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:48.507547 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:48.544030 kubelet[1899]: E0117 12:27:48.543904 1899 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:48.547464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:48.548313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:58.606733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 17 12:27:58.613495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:58.778299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:58.778695 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:58.820925 kubelet[1921]: E0117 12:27:58.820849 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:58.826155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:58.826506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:08.856159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 17 12:28:08.861651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:08.994386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:08.997243 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:09.042606 kubelet[1942]: E0117 12:28:09.042534 1942 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:09.047314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:09.047557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:19.106394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 17 12:28:19.113658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:19.254358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:28:19.258871 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:19.300071 kubelet[1963]: E0117 12:28:19.299996 1963 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:19.306327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:19.306656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:29.356310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 17 12:28:29.373447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:29.519038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:29.523666 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:29.572306 kubelet[1984]: E0117 12:28:29.572247 1984 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:29.576925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:29.577171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:30.251828 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:28:30.256745 systemd[1]: Started sshd@0-49.12.221.202:22-139.178.89.65:49106.service - OpenSSH per-connection server daemon (139.178.89.65:49106). Jan 17 12:28:31.236693 sshd[1992]: Accepted publickey for core from 139.178.89.65 port 49106 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:31.239257 sshd[1992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:31.248954 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:28:31.254471 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:28:31.257995 systemd-logind[1596]: New session 1 of user core. Jan 17 12:28:31.273513 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:28:31.280547 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:28:31.288978 (systemd)[1998]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:28:31.392924 systemd[1998]: Queued start job for default target default.target. Jan 17 12:28:31.393343 systemd[1998]: Created slice app.slice - User Application Slice. Jan 17 12:28:31.393361 systemd[1998]: Reached target paths.target - Paths. Jan 17 12:28:31.393373 systemd[1998]: Reached target timers.target - Timers. Jan 17 12:28:31.398267 systemd[1998]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:28:31.409126 systemd[1998]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:28:31.409229 systemd[1998]: Reached target sockets.target - Sockets. 
Jan 17 12:28:31.409248 systemd[1998]: Reached target basic.target - Basic System. Jan 17 12:28:31.409300 systemd[1998]: Reached target default.target - Main User Target. Jan 17 12:28:31.409341 systemd[1998]: Startup finished in 113ms. Jan 17 12:28:31.409532 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:28:31.420504 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:28:32.104431 systemd[1]: Started sshd@1-49.12.221.202:22-139.178.89.65:60418.service - OpenSSH per-connection server daemon (139.178.89.65:60418). Jan 17 12:28:33.077100 sshd[2010]: Accepted publickey for core from 139.178.89.65 port 60418 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:33.078841 sshd[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:33.083891 systemd-logind[1596]: New session 2 of user core. Jan 17 12:28:33.090454 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:28:33.757949 sshd[2010]: pam_unix(sshd:session): session closed for user core Jan 17 12:28:33.762965 systemd[1]: sshd@1-49.12.221.202:22-139.178.89.65:60418.service: Deactivated successfully. Jan 17 12:28:33.764227 systemd-logind[1596]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:28:33.766274 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:28:33.767146 systemd-logind[1596]: Removed session 2. Jan 17 12:28:33.932469 systemd[1]: Started sshd@2-49.12.221.202:22-139.178.89.65:60426.service - OpenSSH per-connection server daemon (139.178.89.65:60426). Jan 17 12:28:34.901997 sshd[2018]: Accepted publickey for core from 139.178.89.65 port 60426 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:34.903820 sshd[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:34.908524 systemd-logind[1596]: New session 3 of user core. Jan 17 12:28:34.913491 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:28:35.577896 sshd[2018]: pam_unix(sshd:session): session closed for user core Jan 17 12:28:35.582199 systemd[1]: sshd@2-49.12.221.202:22-139.178.89.65:60426.service: Deactivated successfully. Jan 17 12:28:35.586853 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:28:35.587050 systemd-logind[1596]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:28:35.588610 systemd-logind[1596]: Removed session 3. Jan 17 12:28:35.755483 systemd[1]: Started sshd@3-49.12.221.202:22-139.178.89.65:60434.service - OpenSSH per-connection server daemon (139.178.89.65:60434). Jan 17 12:28:36.736083 sshd[2026]: Accepted publickey for core from 139.178.89.65 port 60434 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:36.737928 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:36.743348 systemd-logind[1596]: New session 4 of user core. Jan 17 12:28:36.753507 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:28:37.422748 sshd[2026]: pam_unix(sshd:session): session closed for user core Jan 17 12:28:37.426200 systemd[1]: sshd@3-49.12.221.202:22-139.178.89.65:60434.service: Deactivated successfully. Jan 17 12:28:37.430416 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:28:37.430636 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:28:37.432512 systemd-logind[1596]: Removed session 4. 
Jan 17 12:28:37.586431 systemd[1]: Started sshd@4-49.12.221.202:22-139.178.89.65:60436.service - OpenSSH per-connection server daemon (139.178.89.65:60436). Jan 17 12:28:38.559524 sshd[2034]: Accepted publickey for core from 139.178.89.65 port 60436 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:38.561508 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:38.566820 systemd-logind[1596]: New session 5 of user core. Jan 17 12:28:38.576635 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:28:39.090686 sudo[2038]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:28:39.091065 sudo[2038]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:28:39.111789 sudo[2038]: pam_unix(sudo:session): session closed for user root Jan 17 12:28:39.271080 sshd[2034]: pam_unix(sshd:session): session closed for user core Jan 17 12:28:39.277248 systemd[1]: sshd@4-49.12.221.202:22-139.178.89.65:60436.service: Deactivated successfully. Jan 17 12:28:39.280465 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:28:39.281162 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:28:39.282428 systemd-logind[1596]: Removed session 5. Jan 17 12:28:39.434516 systemd[1]: Started sshd@5-49.12.221.202:22-139.178.89.65:60440.service - OpenSSH per-connection server daemon (139.178.89.65:60440). Jan 17 12:28:39.606259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 17 12:28:39.611406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:39.763385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:39.774818 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:39.818374 kubelet[2057]: E0117 12:28:39.818247 2057 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:39.822925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:39.823191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:40.408929 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 60440 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:40.411622 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:40.419960 systemd-logind[1596]: New session 6 of user core. Jan 17 12:28:40.432756 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:28:40.931326 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:28:40.931758 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:28:40.936003 sudo[2069]: pam_unix(sudo:session): session closed for user root Jan 17 12:28:40.943154 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:28:40.943593 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:28:40.962925 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:28:40.964351 auditctl[2072]: No rules Jan 17 12:28:40.964715 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:28:40.964979 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:28:40.972440 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:28:40.997935 augenrules[2091]: No rules Jan 17 12:28:40.998734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:28:41.000914 sudo[2068]: pam_unix(sudo:session): session closed for user root Jan 17 12:28:41.160107 sshd[2043]: pam_unix(sshd:session): session closed for user core Jan 17 12:28:41.163720 systemd[1]: sshd@5-49.12.221.202:22-139.178.89.65:60440.service: Deactivated successfully. Jan 17 12:28:41.169265 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:28:41.170249 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:28:41.171383 systemd-logind[1596]: Removed session 6. Jan 17 12:28:41.328677 systemd[1]: Started sshd@6-49.12.221.202:22-139.178.89.65:40236.service - OpenSSH per-connection server daemon (139.178.89.65:40236). Jan 17 12:28:42.299094 sshd[2100]: Accepted publickey for core from 139.178.89.65 port 40236 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:28:42.301005 sshd[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:28:42.306308 systemd-logind[1596]: New session 7 of user core. Jan 17 12:28:42.321652 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:28:42.820081 sudo[2104]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:28:42.820524 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:28:43.091401 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:28:43.091619 (dockerd)[2121]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:28:43.332358 dockerd[2121]: time="2025-01-17T12:28:43.332291355Z" level=info msg="Starting up" Jan 17 12:28:43.427223 systemd[1]: var-lib-docker-metacopy\x2dcheck1974335849-merged.mount: Deactivated successfully. Jan 17 12:28:43.453231 dockerd[2121]: time="2025-01-17T12:28:43.452864735Z" level=info msg="Loading containers: start." Jan 17 12:28:43.574200 kernel: Initializing XFRM netlink socket Jan 17 12:28:43.650890 systemd-networkd[1250]: docker0: Link UP Jan 17 12:28:43.668621 dockerd[2121]: time="2025-01-17T12:28:43.668575916Z" level=info msg="Loading containers: done." 
Jan 17 12:28:43.686225 dockerd[2121]: time="2025-01-17T12:28:43.686021584Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:28:43.686225 dockerd[2121]: time="2025-01-17T12:28:43.686125038Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:28:43.686411 dockerd[2121]: time="2025-01-17T12:28:43.686245785Z" level=info msg="Daemon has completed initialization" Jan 17 12:28:43.712963 dockerd[2121]: time="2025-01-17T12:28:43.712901550Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:28:43.713245 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:28:44.805045 containerd[1621]: time="2025-01-17T12:28:44.804796133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:28:45.386343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814265775.mount: Deactivated successfully. Jan 17 12:28:46.351314 containerd[1621]: time="2025-01-17T12:28:46.351271599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:46.352330 containerd[1621]: time="2025-01-17T12:28:46.352167207Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140822" Jan 17 12:28:46.353115 containerd[1621]: time="2025-01-17T12:28:46.353073716Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:46.355532 containerd[1621]: time="2025-01-17T12:28:46.355487570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:46.356537 containerd[1621]: time="2025-01-17T12:28:46.356405649Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 1.551575613s" Jan 17 12:28:46.356537 containerd[1621]: time="2025-01-17T12:28:46.356434584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:28:46.374588 containerd[1621]: time="2025-01-17T12:28:46.374534762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:28:47.693749 containerd[1621]: time="2025-01-17T12:28:47.693653228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:47.695260 containerd[1621]: time="2025-01-17T12:28:47.695191500Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216661" Jan 17 12:28:47.696565 containerd[1621]: time="2025-01-17T12:28:47.696516784Z" level=info msg="ImageCreate event 
name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:47.699167 containerd[1621]: time="2025-01-17T12:28:47.699124951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:47.700472 containerd[1621]: time="2025-01-17T12:28:47.700031691Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.325322303s" Jan 17 12:28:47.700472 containerd[1621]: time="2025-01-17T12:28:47.700069602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:28:47.721326 containerd[1621]: time="2025-01-17T12:28:47.721292533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:28:48.691613 containerd[1621]: time="2025-01-17T12:28:48.691539793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:48.692426 containerd[1621]: time="2025-01-17T12:28:48.692378345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332861" Jan 17 12:28:48.693247 containerd[1621]: time="2025-01-17T12:28:48.693195526Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:48.695537 containerd[1621]: time="2025-01-17T12:28:48.695475970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:48.696479 containerd[1621]: time="2025-01-17T12:28:48.696347203Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 974.882226ms" Jan 17 12:28:48.696479 containerd[1621]: time="2025-01-17T12:28:48.696377389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:28:48.716151 containerd[1621]: time="2025-01-17T12:28:48.716113455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:28:49.655009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234509178.mount: Deactivated successfully. Jan 17 12:28:49.856508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 17 12:28:49.863321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:28:49.955970 containerd[1621]: time="2025-01-17T12:28:49.955734561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:49.956791 containerd[1621]: time="2025-01-17T12:28:49.956754794Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620967" Jan 17 12:28:49.957331 containerd[1621]: time="2025-01-17T12:28:49.957304013Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:49.959654 containerd[1621]: time="2025-01-17T12:28:49.959625975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:49.960431 containerd[1621]: time="2025-01-17T12:28:49.960403742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.244026242s" Jan 17 12:28:49.960470 containerd[1621]: time="2025-01-17T12:28:49.960434750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:28:49.985840 containerd[1621]: time="2025-01-17T12:28:49.983096531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:28:50.010405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:50.017939 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:50.058769 kubelet[2366]: E0117 12:28:50.058718 2366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:50.063467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:50.063706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:50.538764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525679274.mount: Deactivated successfully. 
Jan 17 12:28:51.164768 containerd[1621]: time="2025-01-17T12:28:51.164698296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.165839 containerd[1621]: time="2025-01-17T12:28:51.165790934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 17 12:28:51.166794 containerd[1621]: time="2025-01-17T12:28:51.166728862Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.169355 containerd[1621]: time="2025-01-17T12:28:51.169310901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.170839 containerd[1621]: time="2025-01-17T12:28:51.170281881Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.187152628s" Jan 17 12:28:51.170839 containerd[1621]: time="2025-01-17T12:28:51.170321746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:28:51.191357 containerd[1621]: time="2025-01-17T12:28:51.191331593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:28:51.697369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913411154.mount: Deactivated successfully. 
Jan 17 12:28:51.702099 containerd[1621]: time="2025-01-17T12:28:51.701271393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.702099 containerd[1621]: time="2025-01-17T12:28:51.702060763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Jan 17 12:28:51.702879 containerd[1621]: time="2025-01-17T12:28:51.702839753Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.705074 containerd[1621]: time="2025-01-17T12:28:51.705028235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:51.706637 containerd[1621]: time="2025-01-17T12:28:51.705918383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 514.510096ms" Jan 17 12:28:51.706637 containerd[1621]: time="2025-01-17T12:28:51.705959700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:28:51.729605 containerd[1621]: time="2025-01-17T12:28:51.729564631Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:28:52.267924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149428271.mount: Deactivated successfully. Jan 17 12:28:53.609310 containerd[1621]: time="2025-01-17T12:28:53.609233510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:53.610669 containerd[1621]: time="2025-01-17T12:28:53.610629647Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705" Jan 17 12:28:53.611908 containerd[1621]: time="2025-01-17T12:28:53.611856627Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:53.615542 containerd[1621]: time="2025-01-17T12:28:53.615130593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:28:53.616416 containerd[1621]: time="2025-01-17T12:28:53.616381719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.886552961s" Jan 17 12:28:53.616464 containerd[1621]: time="2025-01-17T12:28:53.616421924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:28:56.507626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
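
Each pull above logs both a "bytes read" figure and a wall-clock duration, which makes it easy to sanity-check registry throughput for this boot. The figures below are copied verbatim from the containerd messages in this section; the MiB/s column is derived and approximate (it ignores layers already present locally and decompression overhead):

package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // pull duration from the log
	}{
		{"kube-apiserver:v1.29.13", 35140822, 1.551575613},
		{"kube-controller-manager:v1.29.13", 32216661, 1.325322303},
		{"kube-scheduler:v1.29.13", 17332861, 0.974882226},
		{"kube-proxy:v1.29.13", 28620967, 1.244026242},
		{"coredns/coredns:v1.11.1", 18185841, 1.187152628},
		{"pause:3.9", 322310, 0.514510096},
		{"etcd:3.5.10-0", 56651705, 1.886552961},
	}
	for _, p := range pulls {
		fmt.Printf("%-35s %7.1f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
	}
}
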
Jan 17 12:28:56.519385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:56.541474 systemd[1]: Reloading requested from client PID 2547 ('systemctl') (unit session-7.scope)... Jan 17 12:28:56.541490 systemd[1]: Reloading... Jan 17 12:28:56.659320 zram_generator::config[2590]: No configuration found. Jan 17 12:28:56.757700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:28:56.818855 systemd[1]: Reloading finished in 276 ms. Jan 17 12:28:56.867736 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:28:56.867826 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:28:56.868150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:56.870492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:57.004730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:57.014587 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:28:57.055262 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:28:57.055262 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:28:57.055262 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:28:57.056466 kubelet[2654]: I0117 12:28:57.056415 2654 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:28:57.214278 kubelet[2654]: I0117 12:28:57.214240 2654 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:28:57.214278 kubelet[2654]: I0117 12:28:57.214269 2654 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:28:57.214465 kubelet[2654]: I0117 12:28:57.214445 2654 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:28:57.240275 kubelet[2654]: I0117 12:28:57.239832 2654 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:28:57.240275 kubelet[2654]: E0117 12:28:57.240246 2654 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.12.221.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.251376 kubelet[2654]: I0117 12:28:57.251331 2654 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:28:57.251733 kubelet[2654]: I0117 12:28:57.251710 2654 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:28:57.252825 kubelet[2654]: I0117 12:28:57.252596 2654 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:28:57.252825 kubelet[2654]: I0117 12:28:57.252825 2654 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:28:57.252966 kubelet[2654]: I0117 12:28:57.252835 2654 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:28:57.252966 kubelet[2654]: I0117 12:28:57.252933 2654 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:28:57.253366 kubelet[2654]: I0117 12:28:57.253019 2654 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:28:57.253366 kubelet[2654]: I0117 12:28:57.253034 2654 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:28:57.254983 kubelet[2654]: I0117 12:28:57.253561 2654 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:28:57.254983 kubelet[2654]: I0117 12:28:57.253579 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:28:57.254983 kubelet[2654]: W0117 12:28:57.253559 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.12.221.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-80d8e78ae3&limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.254983 kubelet[2654]: E0117 12:28:57.253605 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.12.221.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-80d8e78ae3&limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.254983 kubelet[2654]: I0117 12:28:57.254885 2654 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:28:57.256598 kubelet[2654]: W0117 12:28:57.256344 
2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.12.221.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.256598 kubelet[2654]: E0117 12:28:57.256386 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.12.221.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.259948 kubelet[2654]: I0117 12:28:57.259846 2654 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:28:57.264188 kubelet[2654]: W0117 12:28:57.263233 2654 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:28:57.264188 kubelet[2654]: I0117 12:28:57.263837 2654 server.go:1256] "Started kubelet" Jan 17 12:28:57.267989 kubelet[2654]: I0117 12:28:57.267920 2654 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:28:57.269221 kubelet[2654]: I0117 12:28:57.269202 2654 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:28:57.270123 kubelet[2654]: I0117 12:28:57.270110 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:28:57.270485 kubelet[2654]: I0117 12:28:57.270470 2654 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:28:57.272037 kubelet[2654]: I0117 12:28:57.272011 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:28:57.274495 kubelet[2654]: E0117 12:28:57.274042 2654 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.221.202:6443/api/v1/namespaces/default/events\": dial tcp 49.12.221.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-6-80d8e78ae3.181b7aa69aec6ba6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-6-80d8e78ae3,UID:ci-4081-3-0-6-80d8e78ae3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-6-80d8e78ae3,},FirstTimestamp:2025-01-17 12:28:57.263803302 +0000 UTC m=+0.245124513,LastTimestamp:2025-01-17 12:28:57.263803302 +0000 UTC m=+0.245124513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-6-80d8e78ae3,}" Jan 17 12:28:57.278587 kubelet[2654]: I0117 12:28:57.278565 2654 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:28:57.278658 kubelet[2654]: I0117 12:28:57.278642 2654 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:28:57.278715 kubelet[2654]: I0117 12:28:57.278690 2654 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:28:57.278967 kubelet[2654]: W0117 12:28:57.278933 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.12.221.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.279006 kubelet[2654]: E0117 12:28:57.278976 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.12.221.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.279402 kubelet[2654]: E0117 12:28:57.279382 2654 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:28:57.279898 kubelet[2654]: E0117 12:28:57.279601 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.221.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-80d8e78ae3?timeout=10s\": dial tcp 49.12.221.202:6443: connect: connection refused" interval="200ms" Jan 17 12:28:57.280270 kubelet[2654]: I0117 12:28:57.280241 2654 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:28:57.280956 kubelet[2654]: I0117 12:28:57.280316 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:28:57.282510 kubelet[2654]: I0117 12:28:57.282498 2654 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:28:57.293308 kubelet[2654]: I0117 12:28:57.293258 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:28:57.294597 kubelet[2654]: I0117 12:28:57.294303 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:28:57.294597 kubelet[2654]: I0117 12:28:57.294326 2654 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:28:57.294597 kubelet[2654]: I0117 12:28:57.294344 2654 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:28:57.294597 kubelet[2654]: E0117 12:28:57.294403 2654 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:28:57.298847 kubelet[2654]: W0117 12:28:57.298724 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.12.221.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.298847 kubelet[2654]: E0117 12:28:57.298761 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.12.221.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:57.307778 kubelet[2654]: I0117 12:28:57.307750 2654 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:28:57.307778 kubelet[2654]: I0117 12:28:57.307770 2654 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:28:57.307848 kubelet[2654]: I0117 12:28:57.307790 2654 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:28:57.309983 kubelet[2654]: I0117 12:28:57.309947 2654 policy_none.go:49] "None policy: Start" Jan 17 12:28:57.310809 kubelet[2654]: I0117 12:28:57.310789 2654 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:28:57.311280 kubelet[2654]: I0117 12:28:57.310912 2654 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:28:57.320197 kubelet[2654]: I0117 12:28:57.318916 2654 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:28:57.320197 kubelet[2654]: I0117 12:28:57.319159 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:28:57.322489 kubelet[2654]: E0117 12:28:57.322382 2654 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:28:57.380842 kubelet[2654]: I0117 12:28:57.380802 2654 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.381244 kubelet[2654]: E0117 12:28:57.381210 2654 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.221.202:6443/api/v1/nodes\": dial tcp 49.12.221.202:6443: connect: connection refused" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.395421 kubelet[2654]: I0117 12:28:57.395372 2654 topology_manager.go:215] "Topology Admit Handler" podUID="73062473de15b36a4112abcd9acedba3" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.397104 kubelet[2654]: I0117 12:28:57.397068 2654 topology_manager.go:215] "Topology Admit Handler" podUID="c123124fd709d61a3a980b382f7726da" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.398505 kubelet[2654]: I0117 12:28:57.398489 2654 topology_manager.go:215] "Topology Admit Handler" podUID="dd4bea6ed3a06b70467396141c6dd7b2" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.480880 kubelet[2654]: E0117 12:28:57.480812 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.221.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-80d8e78ae3?timeout=10s\": dial tcp 49.12.221.202:6443: connect: connection refused" interval="400ms" Jan 17 12:28:57.580441 kubelet[2654]: I0117 12:28:57.580244 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580441 kubelet[2654]: I0117 12:28:57.580304 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580441 kubelet[2654]: I0117 12:28:57.580362 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580441 kubelet[2654]: I0117 12:28:57.580403 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd4bea6ed3a06b70467396141c6dd7b2-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6-80d8e78ae3\" (UID: \"dd4bea6ed3a06b70467396141c6dd7b2\") " 
pod="kube-system/kube-scheduler-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580441 kubelet[2654]: I0117 12:28:57.580426 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580711 kubelet[2654]: I0117 12:28:57.580449 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580711 kubelet[2654]: I0117 12:28:57.580471 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580711 kubelet[2654]: I0117 12:28:57.580492 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.580711 kubelet[2654]: I0117 12:28:57.580510 2654 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.584348 kubelet[2654]: I0117 12:28:57.584290 2654 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.584615 kubelet[2654]: E0117 12:28:57.584590 2654 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.221.202:6443/api/v1/nodes\": dial tcp 49.12.221.202:6443: connect: connection refused" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.703194 containerd[1621]: time="2025-01-17T12:28:57.703090379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6-80d8e78ae3,Uid:73062473de15b36a4112abcd9acedba3,Namespace:kube-system,Attempt:0,}" Jan 17 12:28:57.706285 containerd[1621]: time="2025-01-17T12:28:57.706231417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6-80d8e78ae3,Uid:c123124fd709d61a3a980b382f7726da,Namespace:kube-system,Attempt:0,}" Jan 17 12:28:57.711898 containerd[1621]: time="2025-01-17T12:28:57.711847534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6-80d8e78ae3,Uid:dd4bea6ed3a06b70467396141c6dd7b2,Namespace:kube-system,Attempt:0,}" Jan 17 12:28:57.881743 kubelet[2654]: E0117 12:28:57.881706 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.12.221.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-80d8e78ae3?timeout=10s\": dial tcp 49.12.221.202:6443: connect: connection refused" interval="800ms" Jan 17 12:28:57.987207 kubelet[2654]: I0117 12:28:57.987165 2654 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:57.987644 kubelet[2654]: E0117 12:28:57.987612 2654 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.221.202:6443/api/v1/nodes\": dial tcp 49.12.221.202:6443: connect: connection refused" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:58.178133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260555398.mount: Deactivated successfully. Jan 17 12:28:58.184442 containerd[1621]: time="2025-01-17T12:28:58.184390017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:28:58.185313 containerd[1621]: time="2025-01-17T12:28:58.185276019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:28:58.186105 containerd[1621]: time="2025-01-17T12:28:58.186044839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:28:58.186874 containerd[1621]: time="2025-01-17T12:28:58.186831805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:28:58.187788 containerd[1621]: time="2025-01-17T12:28:58.187747080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:28:58.188571 containerd[1621]: time="2025-01-17T12:28:58.188532693Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:28:58.189425 containerd[1621]: time="2025-01-17T12:28:58.189390682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 17 12:28:58.191447 containerd[1621]: time="2025-01-17T12:28:58.191407131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:28:58.196208 containerd[1621]: time="2025-01-17T12:28:58.195813421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.592308ms" Jan 17 12:28:58.196474 containerd[1621]: time="2025-01-17T12:28:58.196452529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
484.54349ms" Jan 17 12:28:58.208598 containerd[1621]: time="2025-01-17T12:28:58.208568081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.279317ms" Jan 17 12:28:58.316358 containerd[1621]: time="2025-01-17T12:28:58.316152967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:28:58.316358 containerd[1621]: time="2025-01-17T12:28:58.316260488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:28:58.316358 containerd[1621]: time="2025-01-17T12:28:58.316283451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.316627 containerd[1621]: time="2025-01-17T12:28:58.316583234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.322976 containerd[1621]: time="2025-01-17T12:28:58.322662840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:28:58.323036 containerd[1621]: time="2025-01-17T12:28:58.322834181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:28:58.323476 containerd[1621]: time="2025-01-17T12:28:58.323144313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.323476 containerd[1621]: time="2025-01-17T12:28:58.323304073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.331242 containerd[1621]: time="2025-01-17T12:28:58.330472219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:28:58.331242 containerd[1621]: time="2025-01-17T12:28:58.330510512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:28:58.331242 containerd[1621]: time="2025-01-17T12:28:58.330519959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.331242 containerd[1621]: time="2025-01-17T12:28:58.331105316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:28:58.407396 containerd[1621]: time="2025-01-17T12:28:58.407283889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6-80d8e78ae3,Uid:73062473de15b36a4112abcd9acedba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ce53b63320b6e6163fc8608dec697932a0c53c644cff296f20de20f84642b71\"" Jan 17 12:28:58.413524 containerd[1621]: time="2025-01-17T12:28:58.413457692Z" level=info msg="CreateContainer within sandbox \"2ce53b63320b6e6163fc8608dec697932a0c53c644cff296f20de20f84642b71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:28:58.417912 containerd[1621]: time="2025-01-17T12:28:58.416584443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6-80d8e78ae3,Uid:c123124fd709d61a3a980b382f7726da,Namespace:kube-system,Attempt:0,} returns sandbox id \"de50c6bc7136efd810d0388f822ddd88e4df68be6519890097493159c92ed947\"" Jan 17 12:28:58.420861 containerd[1621]: time="2025-01-17T12:28:58.420829781Z" level=info msg="CreateContainer within sandbox \"de50c6bc7136efd810d0388f822ddd88e4df68be6519890097493159c92ed947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:28:58.422460 containerd[1621]: time="2025-01-17T12:28:58.422433448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6-80d8e78ae3,Uid:dd4bea6ed3a06b70467396141c6dd7b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c0cb5d0ce5b0896c69f414318bc7dae01476b6633f3f2bdb9cfe0f6c9568f14\"" Jan 17 12:28:58.426370 containerd[1621]: time="2025-01-17T12:28:58.426341814Z" level=info msg="CreateContainer within sandbox \"3c0cb5d0ce5b0896c69f414318bc7dae01476b6633f3f2bdb9cfe0f6c9568f14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:28:58.434589 containerd[1621]: time="2025-01-17T12:28:58.433977398Z" level=info msg="CreateContainer within sandbox \"2ce53b63320b6e6163fc8608dec697932a0c53c644cff296f20de20f84642b71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6739cca2d61ca9b242c82ff638c1f05051f20cd9b654580dcd7e368c8f9cc360\"" Jan 17 12:28:58.435302 containerd[1621]: time="2025-01-17T12:28:58.435229045Z" level=info msg="StartContainer for \"6739cca2d61ca9b242c82ff638c1f05051f20cd9b654580dcd7e368c8f9cc360\"" Jan 17 12:28:58.438998 containerd[1621]: time="2025-01-17T12:28:58.438931444Z" level=info msg="CreateContainer within sandbox \"de50c6bc7136efd810d0388f822ddd88e4df68be6519890097493159c92ed947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6f6d30785e4e20bf6700c2ab224ca5cf6d071a406995d61d31906a140f3accb\"" Jan 17 12:28:58.440262 containerd[1621]: time="2025-01-17T12:28:58.439420502Z" level=info msg="StartContainer for \"c6f6d30785e4e20bf6700c2ab224ca5cf6d071a406995d61d31906a140f3accb\"" Jan 17 12:28:58.443597 containerd[1621]: time="2025-01-17T12:28:58.443575991Z" level=info msg="CreateContainer within sandbox \"3c0cb5d0ce5b0896c69f414318bc7dae01476b6633f3f2bdb9cfe0f6c9568f14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8bd9fcdb152885295b03ce6590be21db2444ce9b0d41aeb041548fced712af82\"" Jan 17 12:28:58.444661 containerd[1621]: time="2025-01-17T12:28:58.444525010Z" level=info msg="StartContainer for \"8bd9fcdb152885295b03ce6590be21db2444ce9b0d41aeb041548fced712af82\"" Jan 17 12:28:58.501605 kubelet[2654]: W0117 12:28:58.501534 2654 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.12.221.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-80d8e78ae3&limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.501605 kubelet[2654]: E0117 12:28:58.501607 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.12.221.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6-80d8e78ae3&limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.539944 containerd[1621]: time="2025-01-17T12:28:58.539892153Z" level=info msg="StartContainer for \"c6f6d30785e4e20bf6700c2ab224ca5cf6d071a406995d61d31906a140f3accb\" returns successfully" Jan 17 12:28:58.555099 containerd[1621]: time="2025-01-17T12:28:58.552890518Z" level=info msg="StartContainer for \"6739cca2d61ca9b242c82ff638c1f05051f20cd9b654580dcd7e368c8f9cc360\" returns successfully" Jan 17 12:28:58.560300 kubelet[2654]: W0117 12:28:58.559923 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.12.221.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.560300 kubelet[2654]: E0117 12:28:58.560163 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.12.221.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.571204 containerd[1621]: time="2025-01-17T12:28:58.570968267Z" level=info msg="StartContainer for \"8bd9fcdb152885295b03ce6590be21db2444ce9b0d41aeb041548fced712af82\" returns successfully" Jan 17 12:28:58.682687 kubelet[2654]: E0117 12:28:58.682653 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.221.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6-80d8e78ae3?timeout=10s\": dial tcp 49.12.221.202:6443: connect: connection refused" interval="1.6s" Jan 17 12:28:58.749089 kubelet[2654]: W0117 12:28:58.747842 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.12.221.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.749089 kubelet[2654]: E0117 12:28:58.747921 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.12.221.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.791742 kubelet[2654]: I0117 12:28:58.791374 2654 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:58.791742 kubelet[2654]: E0117 12:28:58.791640 2654 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.221.202:6443/api/v1/nodes\": dial tcp 49.12.221.202:6443: connect: connection refused" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:28:58.858299 kubelet[2654]: W0117 12:28:58.855425 2654 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://49.12.221.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:28:58.858499 kubelet[2654]: E0117 12:28:58.858391 2654 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.12.221.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.221.202:6443: connect: connection refused Jan 17 12:29:00.254441 kubelet[2654]: E0117 12:29:00.254393 2654 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-0-6-80d8e78ae3" not found Jan 17 12:29:00.287745 kubelet[2654]: E0117 12:29:00.287711 2654 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-6-80d8e78ae3\" not found" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:00.394812 kubelet[2654]: I0117 12:29:00.394701 2654 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:00.410456 kubelet[2654]: I0117 12:29:00.410398 2654 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:00.422908 kubelet[2654]: E0117 12:29:00.422861 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:00.523808 kubelet[2654]: E0117 12:29:00.523636 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:00.624169 kubelet[2654]: E0117 12:29:00.624125 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:00.724269 kubelet[2654]: E0117 12:29:00.724219 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:00.825363 kubelet[2654]: E0117 12:29:00.825212 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:00.925885 kubelet[2654]: E0117 12:29:00.925835 2654 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-6-80d8e78ae3\" not found" Jan 17 12:29:01.257737 kubelet[2654]: I0117 12:29:01.257458 2654 apiserver.go:52] "Watching apiserver" Jan 17 12:29:01.280323 kubelet[2654]: I0117 12:29:01.279550 2654 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:29:02.619822 systemd[1]: Reloading requested from client PID 2927 ('systemctl') (unit session-7.scope)... Jan 17 12:29:02.619842 systemd[1]: Reloading... Jan 17 12:29:02.690215 zram_generator::config[2967]: No configuration found. Jan 17 12:29:02.799602 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:29:02.867860 systemd[1]: Reloading finished in 247 ms. Jan 17 12:29:02.906645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:29:02.907502 kubelet[2654]: I0117 12:29:02.906635 2654 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:29:02.928363 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:29:02.928771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:02.934443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:03.076877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:03.081606 (kubelet)[3028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:29:03.138854 kubelet[3028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:29:03.138854 kubelet[3028]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:29:03.138854 kubelet[3028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:29:03.138854 kubelet[3028]: I0117 12:29:03.138773 3028 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:29:03.144543 kubelet[3028]: I0117 12:29:03.143288 3028 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:29:03.144543 kubelet[3028]: I0117 12:29:03.143307 3028 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:29:03.144543 kubelet[3028]: I0117 12:29:03.143481 3028 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:29:03.149272 kubelet[3028]: I0117 12:29:03.149167 3028 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:29:03.152581 kubelet[3028]: I0117 12:29:03.152561 3028 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:29:03.158569 kubelet[3028]: I0117 12:29:03.158443 3028 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159359 3028 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159507 3028 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159530 3028 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159538 3028 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159567 3028 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:29:03.159927 kubelet[3028]: I0117 12:29:03.159657 3028 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:29:03.160170 kubelet[3028]: I0117 12:29:03.159670 3028 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:29:03.160170 kubelet[3028]: I0117 12:29:03.159691 3028 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:29:03.160629 kubelet[3028]: I0117 12:29:03.160304 3028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:29:03.165197 kubelet[3028]: I0117 12:29:03.164293 3028 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:29:03.165197 kubelet[3028]: I0117 12:29:03.164440 3028 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:29:03.165197 kubelet[3028]: I0117 12:29:03.164751 3028 server.go:1256] "Started kubelet" Jan 17 12:29:03.166125 kubelet[3028]: I0117 12:29:03.166102 3028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:29:03.175751 kubelet[3028]: I0117 12:29:03.175730 3028 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:29:03.177354 kubelet[3028]: I0117 12:29:03.177335 3028 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:29:03.178100 kubelet[3028]: I0117 12:29:03.178066 3028 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 17 12:29:03.183162 kubelet[3028]: I0117 12:29:03.183141 3028 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:29:03.184337 kubelet[3028]: I0117 12:29:03.184311 3028 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:29:03.184623 kubelet[3028]: I0117 12:29:03.184558 3028 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:29:03.186038 kubelet[3028]: I0117 12:29:03.186011 3028 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:29:03.205901 kubelet[3028]: E0117 12:29:03.205883 3028 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:29:03.208478 kubelet[3028]: I0117 12:29:03.208269 3028 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:29:03.208478 kubelet[3028]: I0117 12:29:03.208416 3028 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:29:03.208736 kubelet[3028]: I0117 12:29:03.208666 3028 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:29:03.213463 kubelet[3028]: I0117 12:29:03.213411 3028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:29:03.216201 kubelet[3028]: I0117 12:29:03.215715 3028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:29:03.216201 kubelet[3028]: I0117 12:29:03.215738 3028 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:29:03.216201 kubelet[3028]: I0117 12:29:03.215768 3028 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:29:03.216201 kubelet[3028]: E0117 12:29:03.215806 3028 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:29:03.277752 kubelet[3028]: I0117 12:29:03.277716 3028 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:29:03.278024 kubelet[3028]: I0117 12:29:03.278013 3028 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:29:03.278169 kubelet[3028]: I0117 12:29:03.278159 3028 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:29:03.278480 kubelet[3028]: I0117 12:29:03.278416 3028 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:29:03.278480 kubelet[3028]: I0117 12:29:03.278441 3028 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:29:03.278480 kubelet[3028]: I0117 12:29:03.278448 3028 policy_none.go:49] "None policy: Start" Jan 17 12:29:03.281201 kubelet[3028]: I0117 12:29:03.279402 3028 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:29:03.281201 kubelet[3028]: I0117 12:29:03.279424 3028 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:29:03.281201 kubelet[3028]: I0117 12:29:03.279559 3028 state_mem.go:75] "Updated machine memory state" Jan 17 12:29:03.281201 kubelet[3028]: I0117 12:29:03.281010 3028 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:29:03.281532 kubelet[3028]: I0117 12:29:03.281508 3028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:29:03.288113 kubelet[3028]: 
I0117 12:29:03.288099 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.297098 kubelet[3028]: I0117 12:29:03.297060 3028 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.298147 kubelet[3028]: I0117 12:29:03.298134 3028 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.316759 kubelet[3028]: I0117 12:29:03.316733 3028 topology_manager.go:215] "Topology Admit Handler" podUID="dd4bea6ed3a06b70467396141c6dd7b2" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.317009 kubelet[3028]: I0117 12:29:03.316992 3028 topology_manager.go:215] "Topology Admit Handler" podUID="73062473de15b36a4112abcd9acedba3" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.317163 kubelet[3028]: I0117 12:29:03.317149 3028 topology_manager.go:215] "Topology Admit Handler" podUID="c123124fd709d61a3a980b382f7726da" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.325463 kubelet[3028]: E0117 12:29:03.325325 3028 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387727 kubelet[3028]: I0117 12:29:03.387416 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387727 kubelet[3028]: I0117 12:29:03.387465 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387727 kubelet[3028]: I0117 12:29:03.387483 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387727 kubelet[3028]: I0117 12:29:03.387501 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387727 kubelet[3028]: I0117 12:29:03.387519 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 
12:29:03.387928 kubelet[3028]: I0117 12:29:03.387539 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387928 kubelet[3028]: I0117 12:29:03.387560 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c123124fd709d61a3a980b382f7726da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6-80d8e78ae3\" (UID: \"c123124fd709d61a3a980b382f7726da\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387928 kubelet[3028]: I0117 12:29:03.387577 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd4bea6ed3a06b70467396141c6dd7b2-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6-80d8e78ae3\" (UID: \"dd4bea6ed3a06b70467396141c6dd7b2\") " pod="kube-system/kube-scheduler-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:03.387928 kubelet[3028]: I0117 12:29:03.387600 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73062473de15b36a4112abcd9acedba3-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6-80d8e78ae3\" (UID: \"73062473de15b36a4112abcd9acedba3\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:04.161788 kubelet[3028]: I0117 12:29:04.161716 3028 apiserver.go:52] "Watching apiserver" Jan 17 12:29:04.248426 kubelet[3028]: E0117 12:29:04.248402 3028 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-6-80d8e78ae3\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:04.284999 kubelet[3028]: I0117 12:29:04.284937 3028 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:29:04.291002 kubelet[3028]: I0117 12:29:04.290202 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-6-80d8e78ae3" podStartSLOduration=1.290145522 podStartE2EDuration="1.290145522s" podCreationTimestamp="2025-01-17 12:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:04.289060668 +0000 UTC m=+1.201092119" watchObservedRunningTime="2025-01-17 12:29:04.290145522 +0000 UTC m=+1.202176973" Jan 17 12:29:04.291002 kubelet[3028]: I0117 12:29:04.290340 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-6-80d8e78ae3" podStartSLOduration=1.290321852 podStartE2EDuration="1.290321852s" podCreationTimestamp="2025-01-17 12:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:04.278209003 +0000 UTC m=+1.190240454" watchObservedRunningTime="2025-01-17 12:29:04.290321852 +0000 UTC m=+1.202353313" Jan 17 12:29:04.302755 kubelet[3028]: I0117 12:29:04.302493 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-0-6-80d8e78ae3" podStartSLOduration=2.302467584 podStartE2EDuration="2.302467584s" podCreationTimestamp="2025-01-17 12:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:04.302289209 +0000 UTC m=+1.214320660" watchObservedRunningTime="2025-01-17 12:29:04.302467584 +0000 UTC m=+1.214499035" Jan 17 12:29:07.406660 sudo[2104]: pam_unix(sudo:session): session closed for user root Jan 17 12:29:07.566955 sshd[2100]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:07.570555 systemd[1]: sshd@6-49.12.221.202:22-139.178.89.65:40236.service: Deactivated successfully. Jan 17 12:29:07.575084 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:29:07.575556 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:29:07.577357 systemd-logind[1596]: Removed session 7. Jan 17 12:29:18.353293 kubelet[3028]: I0117 12:29:18.352872 3028 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:29:18.353743 containerd[1621]: time="2025-01-17T12:29:18.353169677Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:29:18.356688 kubelet[3028]: I0117 12:29:18.355359 3028 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:29:18.429698 kubelet[3028]: I0117 12:29:18.429499 3028 topology_manager.go:215] "Topology Admit Handler" podUID="5df9595e-4055-4098-9a19-f682d4af4a62" podNamespace="kube-system" podName="kube-proxy-749xh" Jan 17 12:29:18.486262 kubelet[3028]: I0117 12:29:18.486232 3028 topology_manager.go:215] "Topology Admit Handler" podUID="13f4db61-6fe9-46aa-8b5e-e3b03bf4634f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-k8vx8" Jan 17 12:29:18.491620 kubelet[3028]: I0117 12:29:18.491136 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb5l\" (UniqueName: \"kubernetes.io/projected/5df9595e-4055-4098-9a19-f682d4af4a62-kube-api-access-scb5l\") pod \"kube-proxy-749xh\" (UID: \"5df9595e-4055-4098-9a19-f682d4af4a62\") " pod="kube-system/kube-proxy-749xh" Jan 17 12:29:18.491620 kubelet[3028]: I0117 12:29:18.491169 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5df9595e-4055-4098-9a19-f682d4af4a62-kube-proxy\") pod \"kube-proxy-749xh\" (UID: \"5df9595e-4055-4098-9a19-f682d4af4a62\") " pod="kube-system/kube-proxy-749xh" Jan 17 12:29:18.491620 kubelet[3028]: I0117 12:29:18.491535 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5df9595e-4055-4098-9a19-f682d4af4a62-xtables-lock\") pod \"kube-proxy-749xh\" (UID: \"5df9595e-4055-4098-9a19-f682d4af4a62\") " pod="kube-system/kube-proxy-749xh" Jan 17 12:29:18.491620 kubelet[3028]: I0117 12:29:18.491559 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5df9595e-4055-4098-9a19-f682d4af4a62-lib-modules\") pod \"kube-proxy-749xh\" (UID: \"5df9595e-4055-4098-9a19-f682d4af4a62\") " pod="kube-system/kube-proxy-749xh" Jan 17 12:29:18.595598 kubelet[3028]: I0117 12:29:18.595427 3028 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13f4db61-6fe9-46aa-8b5e-e3b03bf4634f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-k8vx8\" (UID: \"13f4db61-6fe9-46aa-8b5e-e3b03bf4634f\") " pod="tigera-operator/tigera-operator-c7ccbd65-k8vx8" Jan 17 12:29:18.595598 kubelet[3028]: I0117 12:29:18.595479 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvtp\" (UniqueName: \"kubernetes.io/projected/13f4db61-6fe9-46aa-8b5e-e3b03bf4634f-kube-api-access-8qvtp\") pod \"tigera-operator-c7ccbd65-k8vx8\" (UID: \"13f4db61-6fe9-46aa-8b5e-e3b03bf4634f\") " pod="tigera-operator/tigera-operator-c7ccbd65-k8vx8" Jan 17 12:29:18.742737 containerd[1621]: time="2025-01-17T12:29:18.742671857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-749xh,Uid:5df9595e-4055-4098-9a19-f682d4af4a62,Namespace:kube-system,Attempt:0,}" Jan 17 12:29:18.773294 containerd[1621]: time="2025-01-17T12:29:18.773005430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:18.773294 containerd[1621]: time="2025-01-17T12:29:18.773088004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:18.773294 containerd[1621]: time="2025-01-17T12:29:18.773105778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:18.774250 containerd[1621]: time="2025-01-17T12:29:18.774023740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:18.794819 containerd[1621]: time="2025-01-17T12:29:18.794786104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-k8vx8,Uid:13f4db61-6fe9-46aa-8b5e-e3b03bf4634f,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:29:18.823794 containerd[1621]: time="2025-01-17T12:29:18.823632377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:18.823994 containerd[1621]: time="2025-01-17T12:29:18.823728878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:18.823994 containerd[1621]: time="2025-01-17T12:29:18.823743906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:18.824623 containerd[1621]: time="2025-01-17T12:29:18.824578412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:18.831842 containerd[1621]: time="2025-01-17T12:29:18.831735863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-749xh,Uid:5df9595e-4055-4098-9a19-f682d4af4a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6770163f0101017324f23d48042e8a5b1637154b0021c23ea3cdc8184022af2\"" Jan 17 12:29:18.834937 containerd[1621]: time="2025-01-17T12:29:18.834910737Z" level=info msg="CreateContainer within sandbox \"a6770163f0101017324f23d48042e8a5b1637154b0021c23ea3cdc8184022af2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:29:18.856252 containerd[1621]: time="2025-01-17T12:29:18.856155467Z" level=info msg="CreateContainer within sandbox \"a6770163f0101017324f23d48042e8a5b1637154b0021c23ea3cdc8184022af2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c057d25792f40d5bbf7be159df738396f8496e7aeca04ccbe8b3455179b34a2\"" Jan 17 12:29:18.859249 containerd[1621]: time="2025-01-17T12:29:18.856995310Z" level=info msg="StartContainer for \"1c057d25792f40d5bbf7be159df738396f8496e7aeca04ccbe8b3455179b34a2\"" Jan 17 12:29:18.887446 containerd[1621]: time="2025-01-17T12:29:18.887420886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-k8vx8,Uid:13f4db61-6fe9-46aa-8b5e-e3b03bf4634f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2291d112dbe22553eeea0dd250d8a2fe55dd0398577c8c769750b472706fc5da\"" Jan 17 12:29:18.889558 containerd[1621]: time="2025-01-17T12:29:18.889519973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:29:18.924799 containerd[1621]: time="2025-01-17T12:29:18.924758604Z" level=info msg="StartContainer for \"1c057d25792f40d5bbf7be159df738396f8496e7aeca04ccbe8b3455179b34a2\" returns successfully" Jan 17 12:29:19.274599 kubelet[3028]: I0117 12:29:19.274009 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-749xh" podStartSLOduration=1.2739725499999999 podStartE2EDuration="1.27397255s" podCreationTimestamp="2025-01-17 12:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:19.273888423 +0000 UTC m=+16.185919884" watchObservedRunningTime="2025-01-17 12:29:19.27397255 +0000 UTC m=+16.186004001" Jan 17 12:29:22.378646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165408346.mount: Deactivated successfully. 
Jan 17 12:29:22.734559 containerd[1621]: time="2025-01-17T12:29:22.734441536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:22.735581 containerd[1621]: time="2025-01-17T12:29:22.735536780Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764321" Jan 17 12:29:22.736444 containerd[1621]: time="2025-01-17T12:29:22.736405058Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:22.738200 containerd[1621]: time="2025-01-17T12:29:22.738147426Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:22.738727 containerd[1621]: time="2025-01-17T12:29:22.738698318Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.849139362s" Jan 17 12:29:22.738770 containerd[1621]: time="2025-01-17T12:29:22.738729287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:29:22.760714 containerd[1621]: time="2025-01-17T12:29:22.760413882Z" level=info msg="CreateContainer within sandbox \"2291d112dbe22553eeea0dd250d8a2fe55dd0398577c8c769750b472706fc5da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:29:22.771650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020395787.mount: Deactivated successfully. 
Jan 17 12:29:22.772120 containerd[1621]: time="2025-01-17T12:29:22.772089849Z" level=info msg="CreateContainer within sandbox \"2291d112dbe22553eeea0dd250d8a2fe55dd0398577c8c769750b472706fc5da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"018d831d38f644e5a7615c677be87aa6a210f73f4a7d0ad0369b99f3ed4fcbec\"" Jan 17 12:29:22.773608 containerd[1621]: time="2025-01-17T12:29:22.773583781Z" level=info msg="StartContainer for \"018d831d38f644e5a7615c677be87aa6a210f73f4a7d0ad0369b99f3ed4fcbec\"" Jan 17 12:29:22.827784 containerd[1621]: time="2025-01-17T12:29:22.827001256Z" level=info msg="StartContainer for \"018d831d38f644e5a7615c677be87aa6a210f73f4a7d0ad0369b99f3ed4fcbec\" returns successfully" Jan 17 12:29:23.289779 kubelet[3028]: I0117 12:29:23.289555 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-k8vx8" podStartSLOduration=1.437262217 podStartE2EDuration="5.289403449s" podCreationTimestamp="2025-01-17 12:29:18 +0000 UTC" firstStartedPulling="2025-01-17 12:29:18.888870946 +0000 UTC m=+15.800902397" lastFinishedPulling="2025-01-17 12:29:22.741012178 +0000 UTC m=+19.653043629" observedRunningTime="2025-01-17 12:29:23.287749006 +0000 UTC m=+20.199780467" watchObservedRunningTime="2025-01-17 12:29:23.289403449 +0000 UTC m=+20.201434910" Jan 17 12:29:25.663062 kubelet[3028]: I0117 12:29:25.663027 3028 topology_manager.go:215] "Topology Admit Handler" podUID="eaf9f44c-94bd-4510-b426-db623b339a59" podNamespace="calico-system" podName="calico-typha-5c6657bb57-qqhsx" Jan 17 12:29:25.714151 kubelet[3028]: I0117 12:29:25.714117 3028 topology_manager.go:215] "Topology Admit Handler" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" podNamespace="calico-system" podName="calico-node-474vg" Jan 17 12:29:25.748242 kubelet[3028]: I0117 12:29:25.748208 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449lv\" (UniqueName: \"kubernetes.io/projected/eaf9f44c-94bd-4510-b426-db623b339a59-kube-api-access-449lv\") pod \"calico-typha-5c6657bb57-qqhsx\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " pod="calico-system/calico-typha-5c6657bb57-qqhsx" Jan 17 12:29:25.748242 kubelet[3028]: I0117 12:29:25.748239 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-lib-modules\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748242 kubelet[3028]: I0117 12:29:25.748257 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-xtables-lock\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748625 kubelet[3028]: I0117 12:29:25.748273 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-lib-calico\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748625 kubelet[3028]: I0117 12:29:25.748321 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-flexvol-driver-host\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748625 kubelet[3028]: I0117 12:29:25.748352 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f286a43-3415-4b62-af04-ec5ba12ddf00-tigera-ca-bundle\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748625 kubelet[3028]: I0117 12:29:25.748384 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eaf9f44c-94bd-4510-b426-db623b339a59-typha-certs\") pod \"calico-typha-5c6657bb57-qqhsx\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " pod="calico-system/calico-typha-5c6657bb57-qqhsx" Jan 17 12:29:25.748625 kubelet[3028]: I0117 12:29:25.748403 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-policysync\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748732 kubelet[3028]: I0117 12:29:25.748420 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-log-dir\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748732 kubelet[3028]: I0117 12:29:25.748445 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf9f44c-94bd-4510-b426-db623b339a59-tigera-ca-bundle\") pod \"calico-typha-5c6657bb57-qqhsx\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " pod="calico-system/calico-typha-5c6657bb57-qqhsx" Jan 17 12:29:25.748732 kubelet[3028]: I0117 12:29:25.748464 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5j9w\" (UniqueName: \"kubernetes.io/projected/4f286a43-3415-4b62-af04-ec5ba12ddf00-kube-api-access-c5j9w\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748732 kubelet[3028]: I0117 12:29:25.748482 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-bin-dir\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748732 kubelet[3028]: I0117 12:29:25.748576 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-run-calico\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748828 kubelet[3028]: I0117 12:29:25.748619 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/4f286a43-3415-4b62-af04-ec5ba12ddf00-node-certs\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.748828 kubelet[3028]: I0117 12:29:25.748639 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-net-dir\") pod \"calico-node-474vg\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " pod="calico-system/calico-node-474vg" Jan 17 12:29:25.844832 kubelet[3028]: I0117 12:29:25.844797 3028 topology_manager.go:215] "Topology Admit Handler" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" podNamespace="calico-system" podName="csi-node-driver-ffrv6" Jan 17 12:29:25.846365 kubelet[3028]: E0117 12:29:25.845056 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:25.857702 kubelet[3028]: E0117 12:29:25.857654 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.857911 kubelet[3028]: W0117 12:29:25.857897 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.858057 kubelet[3028]: E0117 12:29:25.858016 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.884214 kubelet[3028]: E0117 12:29:25.883405 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.888295 kubelet[3028]: W0117 12:29:25.887305 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.888295 kubelet[3028]: E0117 12:29:25.887339 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.892618 kubelet[3028]: E0117 12:29:25.892006 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.892618 kubelet[3028]: W0117 12:29:25.892024 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.892618 kubelet[3028]: E0117 12:29:25.892046 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.894027 kubelet[3028]: E0117 12:29:25.894012 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.894162 kubelet[3028]: W0117 12:29:25.894111 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.894162 kubelet[3028]: E0117 12:29:25.894132 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.902755 kubelet[3028]: E0117 12:29:25.902155 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.902755 kubelet[3028]: W0117 12:29:25.902171 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.902755 kubelet[3028]: E0117 12:29:25.902218 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.933302 kubelet[3028]: E0117 12:29:25.932988 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.933302 kubelet[3028]: W0117 12:29:25.933010 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.933302 kubelet[3028]: E0117 12:29:25.933033 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.934497 kubelet[3028]: E0117 12:29:25.933582 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.934497 kubelet[3028]: W0117 12:29:25.933602 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.934497 kubelet[3028]: E0117 12:29:25.933627 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.934870 kubelet[3028]: E0117 12:29:25.934765 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.934870 kubelet[3028]: W0117 12:29:25.934802 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.934870 kubelet[3028]: E0117 12:29:25.934820 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.935431 kubelet[3028]: E0117 12:29:25.935352 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.935431 kubelet[3028]: W0117 12:29:25.935364 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.935431 kubelet[3028]: E0117 12:29:25.935379 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.935746 kubelet[3028]: E0117 12:29:25.935675 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.935746 kubelet[3028]: W0117 12:29:25.935692 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.935746 kubelet[3028]: E0117 12:29:25.935709 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.936085 kubelet[3028]: E0117 12:29:25.935995 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.936085 kubelet[3028]: W0117 12:29:25.936008 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.936085 kubelet[3028]: E0117 12:29:25.936019 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.936388 kubelet[3028]: E0117 12:29:25.936281 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.936388 kubelet[3028]: W0117 12:29:25.936295 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.936388 kubelet[3028]: E0117 12:29:25.936311 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.936671 kubelet[3028]: E0117 12:29:25.936585 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.936671 kubelet[3028]: W0117 12:29:25.936594 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.936671 kubelet[3028]: E0117 12:29:25.936604 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.937249 kubelet[3028]: E0117 12:29:25.936858 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.937249 kubelet[3028]: W0117 12:29:25.936866 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.937249 kubelet[3028]: E0117 12:29:25.936876 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.937249 kubelet[3028]: E0117 12:29:25.937050 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.937249 kubelet[3028]: W0117 12:29:25.937057 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.937249 kubelet[3028]: E0117 12:29:25.937082 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.937652 kubelet[3028]: E0117 12:29:25.937487 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.937652 kubelet[3028]: W0117 12:29:25.937501 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.937652 kubelet[3028]: E0117 12:29:25.937512 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.937973 kubelet[3028]: E0117 12:29:25.937777 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.937973 kubelet[3028]: W0117 12:29:25.937821 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.937973 kubelet[3028]: E0117 12:29:25.937831 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.938340 kubelet[3028]: E0117 12:29:25.938329 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.938432 kubelet[3028]: W0117 12:29:25.938421 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.938582 kubelet[3028]: E0117 12:29:25.938500 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.938841 kubelet[3028]: E0117 12:29:25.938808 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.938972 kubelet[3028]: W0117 12:29:25.938902 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.938972 kubelet[3028]: E0117 12:29:25.938937 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.939247 kubelet[3028]: E0117 12:29:25.939236 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.939508 kubelet[3028]: W0117 12:29:25.939345 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.939508 kubelet[3028]: E0117 12:29:25.939392 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.939711 kubelet[3028]: E0117 12:29:25.939696 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.939869 kubelet[3028]: W0117 12:29:25.939774 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.939869 kubelet[3028]: E0117 12:29:25.939790 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.940114 kubelet[3028]: E0117 12:29:25.940087 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.940245 kubelet[3028]: W0117 12:29:25.940166 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.940310 kubelet[3028]: E0117 12:29:25.940297 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.941151 kubelet[3028]: E0117 12:29:25.940977 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.941151 kubelet[3028]: W0117 12:29:25.940990 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.941151 kubelet[3028]: E0117 12:29:25.941001 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.941756 kubelet[3028]: E0117 12:29:25.941554 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.941756 kubelet[3028]: W0117 12:29:25.941566 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.941756 kubelet[3028]: E0117 12:29:25.941581 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.942138 kubelet[3028]: E0117 12:29:25.942116 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.942337 kubelet[3028]: W0117 12:29:25.942225 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.942337 kubelet[3028]: E0117 12:29:25.942238 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.950439 kubelet[3028]: E0117 12:29:25.950421 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.950693 kubelet[3028]: W0117 12:29:25.950518 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.950693 kubelet[3028]: E0117 12:29:25.950535 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.950693 kubelet[3028]: I0117 12:29:25.950577 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ce471ce-b9be-46d8-bb3c-6f71b777b182-kubelet-dir\") pod \"csi-node-driver-ffrv6\" (UID: \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\") " pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:25.951229 kubelet[3028]: E0117 12:29:25.950913 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.951229 kubelet[3028]: W0117 12:29:25.950929 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.951229 kubelet[3028]: E0117 12:29:25.950951 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.951229 kubelet[3028]: I0117 12:29:25.950971 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ce471ce-b9be-46d8-bb3c-6f71b777b182-socket-dir\") pod \"csi-node-driver-ffrv6\" (UID: \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\") " pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:25.951553 kubelet[3028]: E0117 12:29:25.951426 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.951553 kubelet[3028]: W0117 12:29:25.951439 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.951553 kubelet[3028]: E0117 12:29:25.951456 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.951553 kubelet[3028]: I0117 12:29:25.951475 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ce471ce-b9be-46d8-bb3c-6f71b777b182-registration-dir\") pod \"csi-node-driver-ffrv6\" (UID: \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\") " pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:25.951702 kubelet[3028]: E0117 12:29:25.951690 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.951752 kubelet[3028]: W0117 12:29:25.951742 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.951815 kubelet[3028]: E0117 12:29:25.951805 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.951879 kubelet[3028]: I0117 12:29:25.951870 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ce471ce-b9be-46d8-bb3c-6f71b777b182-varrun\") pod \"csi-node-driver-ffrv6\" (UID: \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\") " pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:25.952077 kubelet[3028]: E0117 12:29:25.952052 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.952077 kubelet[3028]: W0117 12:29:25.952071 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.952152 kubelet[3028]: E0117 12:29:25.952093 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.952344 kubelet[3028]: E0117 12:29:25.952318 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.952344 kubelet[3028]: W0117 12:29:25.952333 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.952461 kubelet[3028]: E0117 12:29:25.952422 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.952523 kubelet[3028]: E0117 12:29:25.952509 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.952523 kubelet[3028]: W0117 12:29:25.952521 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.952701 kubelet[3028]: E0117 12:29:25.952601 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.952759 kubelet[3028]: E0117 12:29:25.952737 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.952759 kubelet[3028]: W0117 12:29:25.952749 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.952811 kubelet[3028]: E0117 12:29:25.952765 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.952971 kubelet[3028]: E0117 12:29:25.952949 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.952971 kubelet[3028]: W0117 12:29:25.952966 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.953020 kubelet[3028]: E0117 12:29:25.952979 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.953020 kubelet[3028]: I0117 12:29:25.953006 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfbjf\" (UniqueName: \"kubernetes.io/projected/8ce471ce-b9be-46d8-bb3c-6f71b777b182-kube-api-access-kfbjf\") pod \"csi-node-driver-ffrv6\" (UID: \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\") " pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:25.953244 kubelet[3028]: E0117 12:29:25.953219 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.953244 kubelet[3028]: W0117 12:29:25.953233 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.953305 kubelet[3028]: E0117 12:29:25.953248 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.953478 kubelet[3028]: E0117 12:29:25.953455 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.953478 kubelet[3028]: W0117 12:29:25.953469 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.953478 kubelet[3028]: E0117 12:29:25.953480 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.953738 kubelet[3028]: E0117 12:29:25.953715 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.953738 kubelet[3028]: W0117 12:29:25.953729 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.953791 kubelet[3028]: E0117 12:29:25.953743 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.953947 kubelet[3028]: E0117 12:29:25.953934 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.953947 kubelet[3028]: W0117 12:29:25.953944 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.954008 kubelet[3028]: E0117 12:29:25.953954 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:25.954146 kubelet[3028]: E0117 12:29:25.954129 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.954146 kubelet[3028]: W0117 12:29:25.954140 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.954146 kubelet[3028]: E0117 12:29:25.954150 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.954362 kubelet[3028]: E0117 12:29:25.954348 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:25.954362 kubelet[3028]: W0117 12:29:25.954358 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:25.954362 kubelet[3028]: E0117 12:29:25.954369 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:25.972831 containerd[1621]: time="2025-01-17T12:29:25.972748809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6657bb57-qqhsx,Uid:eaf9f44c-94bd-4510-b426-db623b339a59,Namespace:calico-system,Attempt:0,}" Jan 17 12:29:26.007943 containerd[1621]: time="2025-01-17T12:29:26.007725975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:26.007943 containerd[1621]: time="2025-01-17T12:29:26.007832515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:26.008084 containerd[1621]: time="2025-01-17T12:29:26.007913597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:26.010034 containerd[1621]: time="2025-01-17T12:29:26.008732724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:26.022048 containerd[1621]: time="2025-01-17T12:29:26.021845294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-474vg,Uid:4f286a43-3415-4b62-af04-ec5ba12ddf00,Namespace:calico-system,Attempt:0,}" Jan 17 12:29:26.057863 kubelet[3028]: E0117 12:29:26.057830 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.057863 kubelet[3028]: W0117 12:29:26.057854 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.057863 kubelet[3028]: E0117 12:29:26.057896 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.059651 kubelet[3028]: E0117 12:29:26.059615 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.059651 kubelet[3028]: W0117 12:29:26.059634 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.059651 kubelet[3028]: E0117 12:29:26.059652 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.060933 kubelet[3028]: E0117 12:29:26.060100 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.060933 kubelet[3028]: W0117 12:29:26.060114 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.060933 kubelet[3028]: E0117 12:29:26.060297 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.061401 kubelet[3028]: E0117 12:29:26.061379 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.061401 kubelet[3028]: W0117 12:29:26.061395 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.061797 kubelet[3028]: E0117 12:29:26.061408 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.063349 kubelet[3028]: E0117 12:29:26.063325 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.063349 kubelet[3028]: W0117 12:29:26.063341 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.063349 kubelet[3028]: E0117 12:29:26.063353 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.065038 kubelet[3028]: E0117 12:29:26.064982 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.065038 kubelet[3028]: W0117 12:29:26.064997 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.065493 kubelet[3028]: E0117 12:29:26.065475 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.065679 kubelet[3028]: W0117 12:29:26.065560 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.065679 kubelet[3028]: E0117 12:29:26.065588 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.065840 kubelet[3028]: E0117 12:29:26.065822 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.066094 kubelet[3028]: E0117 12:29:26.065960 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.066094 kubelet[3028]: W0117 12:29:26.065981 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.066094 kubelet[3028]: E0117 12:29:26.066009 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.066310 kubelet[3028]: E0117 12:29:26.066299 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.066399 kubelet[3028]: W0117 12:29:26.066386 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.066488 kubelet[3028]: E0117 12:29:26.066478 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.067660 kubelet[3028]: E0117 12:29:26.067640 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.067786 kubelet[3028]: W0117 12:29:26.067655 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.067786 kubelet[3028]: E0117 12:29:26.067795 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.069119 kubelet[3028]: E0117 12:29:26.069027 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.069119 kubelet[3028]: W0117 12:29:26.069040 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.069427 kubelet[3028]: E0117 12:29:26.069384 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.070626 kubelet[3028]: E0117 12:29:26.070335 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.070626 kubelet[3028]: W0117 12:29:26.070347 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.070868 kubelet[3028]: E0117 12:29:26.070782 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.070968 kubelet[3028]: W0117 12:29:26.070918 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.071395 kubelet[3028]: E0117 12:29:26.071340 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.071395 kubelet[3028]: W0117 12:29:26.071352 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.071743 kubelet[3028]: E0117 12:29:26.071719 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.073046 kubelet[3028]: E0117 12:29:26.072389 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.073046 kubelet[3028]: W0117 12:29:26.072560 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.073046 kubelet[3028]: E0117 12:29:26.072399 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.073046 kubelet[3028]: E0117 12:29:26.072406 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.073046 kubelet[3028]: E0117 12:29:26.072700 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.073960 kubelet[3028]: E0117 12:29:26.073938 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.073960 kubelet[3028]: W0117 12:29:26.073954 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.074380 kubelet[3028]: E0117 12:29:26.074358 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.074380 kubelet[3028]: W0117 12:29:26.074373 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.074639 kubelet[3028]: E0117 12:29:26.074615 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.074692 kubelet[3028]: E0117 12:29:26.074650 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.075057 kubelet[3028]: E0117 12:29:26.075025 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.075057 kubelet[3028]: W0117 12:29:26.075040 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.075235 kubelet[3028]: E0117 12:29:26.075211 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.076300 kubelet[3028]: E0117 12:29:26.076279 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.076300 kubelet[3028]: W0117 12:29:26.076294 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.076370 kubelet[3028]: E0117 12:29:26.076311 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.076541 kubelet[3028]: E0117 12:29:26.076522 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.076541 kubelet[3028]: W0117 12:29:26.076534 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.076603 kubelet[3028]: E0117 12:29:26.076583 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.076788 kubelet[3028]: E0117 12:29:26.076772 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.076788 kubelet[3028]: W0117 12:29:26.076784 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.076897 kubelet[3028]: E0117 12:29:26.076866 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.077046 kubelet[3028]: E0117 12:29:26.077029 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.077046 kubelet[3028]: W0117 12:29:26.077042 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.077532 kubelet[3028]: E0117 12:29:26.077131 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.077532 kubelet[3028]: E0117 12:29:26.077313 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.077532 kubelet[3028]: W0117 12:29:26.077321 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.077532 kubelet[3028]: E0117 12:29:26.077334 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.079868 kubelet[3028]: E0117 12:29:26.077804 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.079868 kubelet[3028]: W0117 12:29:26.077818 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.079868 kubelet[3028]: E0117 12:29:26.077855 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.080952 kubelet[3028]: E0117 12:29:26.080533 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.080952 kubelet[3028]: W0117 12:29:26.080551 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.080952 kubelet[3028]: E0117 12:29:26.080567 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:26.101202 kubelet[3028]: E0117 12:29:26.100877 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:26.101202 kubelet[3028]: W0117 12:29:26.100922 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:26.101202 kubelet[3028]: E0117 12:29:26.100944 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:26.105393 containerd[1621]: time="2025-01-17T12:29:26.104326250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:26.105393 containerd[1621]: time="2025-01-17T12:29:26.104395530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:26.105393 containerd[1621]: time="2025-01-17T12:29:26.104419234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:26.105393 containerd[1621]: time="2025-01-17T12:29:26.104523410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:26.167975 containerd[1621]: time="2025-01-17T12:29:26.167938438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6657bb57-qqhsx,Uid:eaf9f44c-94bd-4510-b426-db623b339a59,Namespace:calico-system,Attempt:0,} returns sandbox id \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\"" Jan 17 12:29:26.177759 containerd[1621]: time="2025-01-17T12:29:26.177689976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:29:26.186816 containerd[1621]: time="2025-01-17T12:29:26.186620515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-474vg,Uid:4f286a43-3415-4b62-af04-ec5ba12ddf00,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\"" Jan 17 12:29:27.734800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293771320.mount: Deactivated successfully. 
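The repeated driver-call.go:262 / plugins.go:730 entries above come from the kubelet's periodic FlexVolume plugin probe: it finds a vendor~driver directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the expected executable "uds" is not there, so the "init" call produces no output and decoding the empty string fails with "unexpected end of JSON input". The pod startup below proceeds anyway; the messages are noise unless a workload actually relies on that FlexVolume driver. The following is a minimal sketch of a driver binary that would satisfy the probe, assuming only the standard FlexVolume driver contract (the driver prints a JSON status object for each command it is called with); the file name and install path are taken from the log, everything else is illustrative:

    // uds.go - hypothetical stand-in for
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    // It only answers the kubelet's probe: "uds init" must print a JSON status
    // object, otherwise driver-call.go reports "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
    }

    func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) < 2 {
            reply(driverStatus{Status: "Failure", Message: "no command given"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Tell the kubelet the driver exists and does not implement attach/detach.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // mount/unmount/etc. are deliberately left out of this sketch.
            reply(driverStatus{Status: "Not supported", Message: "command not implemented: " + os.Args[1]})
        }
    }

In practice the cleaner options are presumably either to install whatever component is supposed to provide the nodeagent~uds driver, or to remove the empty nodeagent~uds directory so the prober stops scanning it; the log itself only shows that the executable is missing from $PATH at that location.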
Jan 17 12:29:28.216610 kubelet[3028]: E0117 12:29:28.216464 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:29.075544 containerd[1621]: time="2025-01-17T12:29:29.075482549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:29.076382 containerd[1621]: time="2025-01-17T12:29:29.076223959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:29:29.076964 containerd[1621]: time="2025-01-17T12:29:29.076910386Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:29.078597 containerd[1621]: time="2025-01-17T12:29:29.078556824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:29.079139 containerd[1621]: time="2025-01-17T12:29:29.079055409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.901331829s" Jan 17 12:29:29.079139 containerd[1621]: time="2025-01-17T12:29:29.079088812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:29:29.080812 containerd[1621]: time="2025-01-17T12:29:29.080569438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:29:29.092310 containerd[1621]: time="2025-01-17T12:29:29.092275773Z" level=info msg="CreateContainer within sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:29:29.109136 containerd[1621]: time="2025-01-17T12:29:29.108975674Z" level=info msg="CreateContainer within sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\"" Jan 17 12:29:29.109736 containerd[1621]: time="2025-01-17T12:29:29.109522920Z" level=info msg="StartContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\"" Jan 17 12:29:29.196778 containerd[1621]: time="2025-01-17T12:29:29.196713282Z" level=info msg="StartContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" returns successfully" Jan 17 12:29:29.365596 kubelet[3028]: E0117 12:29:29.365562 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.365596 kubelet[3028]: W0117 12:29:29.365588 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.366113 kubelet[3028]: E0117 12:29:29.365612 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.366113 kubelet[3028]: E0117 12:29:29.365937 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.366113 kubelet[3028]: W0117 12:29:29.365948 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.366113 kubelet[3028]: E0117 12:29:29.365982 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.366931 kubelet[3028]: E0117 12:29:29.366910 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.367299 kubelet[3028]: W0117 12:29:29.367006 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.367299 kubelet[3028]: E0117 12:29:29.367038 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.367575 kubelet[3028]: E0117 12:29:29.367539 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.367575 kubelet[3028]: W0117 12:29:29.367554 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.367575 kubelet[3028]: E0117 12:29:29.367569 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.367841 kubelet[3028]: E0117 12:29:29.367772 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.367841 kubelet[3028]: W0117 12:29:29.367781 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.367841 kubelet[3028]: E0117 12:29:29.367792 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.368618 kubelet[3028]: E0117 12:29:29.367983 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.368618 kubelet[3028]: W0117 12:29:29.367991 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.368618 kubelet[3028]: E0117 12:29:29.368003 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.368618 kubelet[3028]: E0117 12:29:29.368199 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.368618 kubelet[3028]: W0117 12:29:29.368206 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.368618 kubelet[3028]: E0117 12:29:29.368217 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.368929 kubelet[3028]: E0117 12:29:29.368789 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.368929 kubelet[3028]: W0117 12:29:29.368800 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.368929 kubelet[3028]: E0117 12:29:29.368811 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.369258 kubelet[3028]: E0117 12:29:29.369080 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.369258 kubelet[3028]: W0117 12:29:29.369092 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.369258 kubelet[3028]: E0117 12:29:29.369125 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.369455 kubelet[3028]: E0117 12:29:29.369415 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.369455 kubelet[3028]: W0117 12:29:29.369433 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.369455 kubelet[3028]: E0117 12:29:29.369444 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.369745 kubelet[3028]: E0117 12:29:29.369720 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.369745 kubelet[3028]: W0117 12:29:29.369737 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.369819 kubelet[3028]: E0117 12:29:29.369754 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.370049 kubelet[3028]: E0117 12:29:29.370028 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.370049 kubelet[3028]: W0117 12:29:29.370044 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.370105 kubelet[3028]: E0117 12:29:29.370057 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.370351 kubelet[3028]: E0117 12:29:29.370330 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.370351 kubelet[3028]: W0117 12:29:29.370344 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.370427 kubelet[3028]: E0117 12:29:29.370356 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.370652 kubelet[3028]: E0117 12:29:29.370579 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.370652 kubelet[3028]: W0117 12:29:29.370593 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.370652 kubelet[3028]: E0117 12:29:29.370604 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.370806 kubelet[3028]: E0117 12:29:29.370785 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.370806 kubelet[3028]: W0117 12:29:29.370799 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.370918 kubelet[3028]: E0117 12:29:29.370810 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.386399 kubelet[3028]: E0117 12:29:29.386365 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.386399 kubelet[3028]: W0117 12:29:29.386388 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.386399 kubelet[3028]: E0117 12:29:29.386405 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.386676 kubelet[3028]: E0117 12:29:29.386658 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.386676 kubelet[3028]: W0117 12:29:29.386671 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.386818 kubelet[3028]: E0117 12:29:29.386687 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.387005 kubelet[3028]: E0117 12:29:29.386974 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.387005 kubelet[3028]: W0117 12:29:29.386995 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.387138 kubelet[3028]: E0117 12:29:29.387019 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.387351 kubelet[3028]: E0117 12:29:29.387334 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.387351 kubelet[3028]: W0117 12:29:29.387347 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.387480 kubelet[3028]: E0117 12:29:29.387367 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.387663 kubelet[3028]: E0117 12:29:29.387637 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.387663 kubelet[3028]: W0117 12:29:29.387654 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.387736 kubelet[3028]: E0117 12:29:29.387675 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.388022 kubelet[3028]: E0117 12:29:29.387932 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.388022 kubelet[3028]: W0117 12:29:29.387945 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.388022 kubelet[3028]: E0117 12:29:29.387985 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.388227 kubelet[3028]: E0117 12:29:29.388172 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.388227 kubelet[3028]: W0117 12:29:29.388216 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.388373 kubelet[3028]: E0117 12:29:29.388280 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.388399 kubelet[3028]: E0117 12:29:29.388390 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.388399 kubelet[3028]: W0117 12:29:29.388397 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.388984 kubelet[3028]: E0117 12:29:29.388521 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.388984 kubelet[3028]: E0117 12:29:29.388563 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.388984 kubelet[3028]: W0117 12:29:29.388570 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.388984 kubelet[3028]: E0117 12:29:29.388585 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.388984 kubelet[3028]: E0117 12:29:29.388904 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.388984 kubelet[3028]: W0117 12:29:29.388915 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.388984 kubelet[3028]: E0117 12:29:29.388935 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.389229 kubelet[3028]: E0117 12:29:29.389207 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.389229 kubelet[3028]: W0117 12:29:29.389222 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.389299 kubelet[3028]: E0117 12:29:29.389250 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.389523 kubelet[3028]: E0117 12:29:29.389502 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.389523 kubelet[3028]: W0117 12:29:29.389519 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.389591 kubelet[3028]: E0117 12:29:29.389549 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.390002 kubelet[3028]: E0117 12:29:29.389979 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.390002 kubelet[3028]: W0117 12:29:29.389997 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.390115 kubelet[3028]: E0117 12:29:29.390096 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.390396 kubelet[3028]: E0117 12:29:29.390304 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.390396 kubelet[3028]: W0117 12:29:29.390317 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.390396 kubelet[3028]: E0117 12:29:29.390356 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.390587 kubelet[3028]: E0117 12:29:29.390559 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.390587 kubelet[3028]: W0117 12:29:29.390575 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.390715 kubelet[3028]: E0117 12:29:29.390598 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:29.390895 kubelet[3028]: E0117 12:29:29.390850 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.390895 kubelet[3028]: W0117 12:29:29.390884 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.390895 kubelet[3028]: E0117 12:29:29.390900 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.391171 kubelet[3028]: E0117 12:29:29.391144 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.391171 kubelet[3028]: W0117 12:29:29.391158 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.391171 kubelet[3028]: E0117 12:29:29.391170 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:29.391651 kubelet[3028]: E0117 12:29:29.391633 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:29.391748 kubelet[3028]: W0117 12:29:29.391720 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:29.391748 kubelet[3028]: E0117 12:29:29.391746 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.216505 kubelet[3028]: E0117 12:29:30.216452 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:30.290296 kubelet[3028]: I0117 12:29:30.290257 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:29:30.379109 kubelet[3028]: E0117 12:29:30.379077 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.380389 kubelet[3028]: W0117 12:29:30.379674 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.380389 kubelet[3028]: E0117 12:29:30.379716 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.380389 kubelet[3028]: E0117 12:29:30.380088 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.380389 kubelet[3028]: W0117 12:29:30.380101 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.380389 kubelet[3028]: E0117 12:29:30.380123 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.381603 kubelet[3028]: E0117 12:29:30.380467 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.381603 kubelet[3028]: W0117 12:29:30.380478 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.381603 kubelet[3028]: E0117 12:29:30.380496 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.382874 kubelet[3028]: E0117 12:29:30.382567 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.382874 kubelet[3028]: W0117 12:29:30.382582 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.382874 kubelet[3028]: E0117 12:29:30.382619 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.383263 kubelet[3028]: E0117 12:29:30.383073 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.383263 kubelet[3028]: W0117 12:29:30.383100 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.383263 kubelet[3028]: E0117 12:29:30.383117 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.384158 kubelet[3028]: E0117 12:29:30.384082 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.384158 kubelet[3028]: W0117 12:29:30.384095 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.384158 kubelet[3028]: E0117 12:29:30.384107 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.384681 kubelet[3028]: E0117 12:29:30.384578 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.384681 kubelet[3028]: W0117 12:29:30.384594 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.384681 kubelet[3028]: E0117 12:29:30.384611 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.385279 kubelet[3028]: E0117 12:29:30.385160 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.385279 kubelet[3028]: W0117 12:29:30.385190 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.385279 kubelet[3028]: E0117 12:29:30.385213 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.385697 kubelet[3028]: E0117 12:29:30.385607 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.385697 kubelet[3028]: W0117 12:29:30.385618 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.385697 kubelet[3028]: E0117 12:29:30.385629 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.386435 kubelet[3028]: E0117 12:29:30.385924 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.386435 kubelet[3028]: W0117 12:29:30.385933 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.386435 kubelet[3028]: E0117 12:29:30.385945 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.387203 kubelet[3028]: E0117 12:29:30.387105 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.387203 kubelet[3028]: W0117 12:29:30.387117 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.387203 kubelet[3028]: E0117 12:29:30.387128 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.387578 kubelet[3028]: E0117 12:29:30.387472 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.387578 kubelet[3028]: W0117 12:29:30.387482 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.387578 kubelet[3028]: E0117 12:29:30.387493 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.387897 kubelet[3028]: E0117 12:29:30.387815 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.387897 kubelet[3028]: W0117 12:29:30.387825 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.387897 kubelet[3028]: E0117 12:29:30.387835 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.388369 kubelet[3028]: E0117 12:29:30.388273 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.388369 kubelet[3028]: W0117 12:29:30.388283 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.388369 kubelet[3028]: E0117 12:29:30.388293 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.388669 kubelet[3028]: E0117 12:29:30.388587 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.388669 kubelet[3028]: W0117 12:29:30.388598 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.388669 kubelet[3028]: E0117 12:29:30.388608 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.395067 kubelet[3028]: E0117 12:29:30.395017 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.395067 kubelet[3028]: W0117 12:29:30.395064 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.395160 kubelet[3028]: E0117 12:29:30.395077 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.395405 kubelet[3028]: E0117 12:29:30.395371 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.395405 kubelet[3028]: W0117 12:29:30.395382 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.395491 kubelet[3028]: E0117 12:29:30.395409 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.395651 kubelet[3028]: E0117 12:29:30.395629 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.395703 kubelet[3028]: W0117 12:29:30.395661 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.395703 kubelet[3028]: E0117 12:29:30.395685 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.396077 kubelet[3028]: E0117 12:29:30.396061 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.396077 kubelet[3028]: W0117 12:29:30.396071 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.396162 kubelet[3028]: E0117 12:29:30.396086 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.396331 kubelet[3028]: E0117 12:29:30.396317 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.396331 kubelet[3028]: W0117 12:29:30.396330 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.396412 kubelet[3028]: E0117 12:29:30.396400 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.396649 kubelet[3028]: E0117 12:29:30.396619 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.396649 kubelet[3028]: W0117 12:29:30.396630 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.396649 kubelet[3028]: E0117 12:29:30.396644 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.396868 kubelet[3028]: E0117 12:29:30.396841 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.396868 kubelet[3028]: W0117 12:29:30.396864 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.397022 kubelet[3028]: E0117 12:29:30.396887 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.397275 kubelet[3028]: E0117 12:29:30.397210 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.397394 kubelet[3028]: W0117 12:29:30.397221 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.397445 kubelet[3028]: E0117 12:29:30.397400 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.397645 kubelet[3028]: E0117 12:29:30.397602 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.397645 kubelet[3028]: W0117 12:29:30.397613 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.397703 kubelet[3028]: E0117 12:29:30.397672 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.397917 kubelet[3028]: E0117 12:29:30.397900 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.397917 kubelet[3028]: W0117 12:29:30.397911 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.398041 kubelet[3028]: E0117 12:29:30.398020 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.398262 kubelet[3028]: E0117 12:29:30.398245 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.398262 kubelet[3028]: W0117 12:29:30.398256 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.398486 kubelet[3028]: E0117 12:29:30.398474 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.398486 kubelet[3028]: W0117 12:29:30.398485 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.398567 kubelet[3028]: E0117 12:29:30.398558 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.398600 kubelet[3028]: E0117 12:29:30.398586 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.398738 kubelet[3028]: E0117 12:29:30.398718 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.398738 kubelet[3028]: W0117 12:29:30.398731 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.398805 kubelet[3028]: E0117 12:29:30.398745 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.399094 kubelet[3028]: E0117 12:29:30.399076 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.399141 kubelet[3028]: W0117 12:29:30.399124 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.399141 kubelet[3028]: E0117 12:29:30.399139 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.399418 kubelet[3028]: E0117 12:29:30.399408 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.399418 kubelet[3028]: W0117 12:29:30.399417 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.399476 kubelet[3028]: E0117 12:29:30.399428 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.399704 kubelet[3028]: E0117 12:29:30.399672 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.399704 kubelet[3028]: W0117 12:29:30.399691 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.399704 kubelet[3028]: E0117 12:29:30.399702 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.399949 kubelet[3028]: E0117 12:29:30.399931 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.399949 kubelet[3028]: W0117 12:29:30.399943 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.400001 kubelet[3028]: E0117 12:29:30.399963 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:29:30.400380 containerd[1621]: time="2025-01-17T12:29:30.400353840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:30.401473 kubelet[3028]: E0117 12:29:30.401247 3028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:29:30.401473 kubelet[3028]: W0117 12:29:30.401259 3028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:29:30.401473 kubelet[3028]: E0117 12:29:30.401269 3028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:29:30.401835 containerd[1621]: time="2025-01-17T12:29:30.401800453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:29:30.407872 containerd[1621]: time="2025-01-17T12:29:30.407799935Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:30.421642 containerd[1621]: time="2025-01-17T12:29:30.421600958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:30.422538 containerd[1621]: time="2025-01-17T12:29:30.422160788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.341553729s" Jan 17 12:29:30.422538 containerd[1621]: time="2025-01-17T12:29:30.422301642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:29:30.426328 containerd[1621]: time="2025-01-17T12:29:30.426300532Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:29:30.488914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596114768.mount: Deactivated successfully. Jan 17 12:29:30.491571 containerd[1621]: time="2025-01-17T12:29:30.489631709Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\"" Jan 17 12:29:30.493135 containerd[1621]: time="2025-01-17T12:29:30.493103119Z" level=info msg="StartContainer for \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\"" Jan 17 12:29:30.554007 containerd[1621]: time="2025-01-17T12:29:30.553941620Z" level=info msg="StartContainer for \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\" returns successfully" Jan 17 12:29:30.589093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b-rootfs.mount: Deactivated successfully. 
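
A hedged aside on the FlexVolume noise above (this paragraph and the sketch below are annotations, not part of the log): the kubelet periodically re-probes every sub-directory of its FlexVolume plugin directory, and for nodeagent~uds it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, expecting a JSON status object on stdout. Until the flexvol-driver init container started above (pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1) installs that uds binary, the probe yields exactly the pair of messages repeated throughout this log: "executable file not found in $PATH" and "unexpected end of JSON input". The Python sketch below only illustrates that probe; the plugin path is taken from the error messages themselves, everything else (function name, main block) is an assumption for illustration.

    # Hedged sketch: mimic the kubelet's FlexVolume "init" probe seen in the log above.
    import json
    import os
    import subprocess

    # Path taken verbatim from the kubelet error messages in this log.
    PLUGIN_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
    DRIVER = os.path.join(PLUGIN_DIR, "uds")

    def probe_flexvolume_driver(driver_path: str) -> dict:
        """Run '<driver> init' as a FlexVolume probe would and parse its JSON reply."""
        if not os.access(driver_path, os.X_OK):
            # Corresponds to the W "executable file not found in $PATH", output: ""
            raise FileNotFoundError(f"executable not found: {driver_path}")
        out = subprocess.run([driver_path, "init"], capture_output=True, text=True, check=False)
        # Empty stdout is what produces "unexpected end of JSON input" in the log.
        return json.loads(out.stdout)

    if __name__ == "__main__":
        try:
            print(probe_flexvolume_driver(DRIVER))
        except (FileNotFoundError, json.JSONDecodeError) as err:
            print(f"driver probe failed (as in the log): {err}")

Once the flexvol-driver container has copied the uds binary into place, the same probe returns a JSON status and the repeated warnings stop, which is consistent with the successful StartContainer entry just above.
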
Jan 17 12:29:30.607678 containerd[1621]: time="2025-01-17T12:29:30.596409599Z" level=info msg="shim disconnected" id=d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b namespace=k8s.io Jan 17 12:29:30.607678 containerd[1621]: time="2025-01-17T12:29:30.607592621Z" level=warning msg="cleaning up after shim disconnected" id=d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b namespace=k8s.io Jan 17 12:29:30.607678 containerd[1621]: time="2025-01-17T12:29:30.607605636Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:29:31.295687 containerd[1621]: time="2025-01-17T12:29:31.295538163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:29:31.309962 kubelet[3028]: I0117 12:29:31.308455 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c6657bb57-qqhsx" podStartSLOduration=3.403254443 podStartE2EDuration="6.308417066s" podCreationTimestamp="2025-01-17 12:29:25 +0000 UTC" firstStartedPulling="2025-01-17 12:29:26.174325617 +0000 UTC m=+23.086357069" lastFinishedPulling="2025-01-17 12:29:29.079488241 +0000 UTC m=+25.991519692" observedRunningTime="2025-01-17 12:29:29.300117953 +0000 UTC m=+26.212149414" watchObservedRunningTime="2025-01-17 12:29:31.308417066 +0000 UTC m=+28.220448518" Jan 17 12:29:32.216873 kubelet[3028]: E0117 12:29:32.216741 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:33.900795 containerd[1621]: time="2025-01-17T12:29:33.900734735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:33.901601 containerd[1621]: time="2025-01-17T12:29:33.901440949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:29:33.902136 containerd[1621]: time="2025-01-17T12:29:33.902089145Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:33.903954 containerd[1621]: time="2025-01-17T12:29:33.903934045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:33.904663 containerd[1621]: time="2025-01-17T12:29:33.904527618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.608766207s" Jan 17 12:29:33.904663 containerd[1621]: time="2025-01-17T12:29:33.904556472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:29:33.906549 containerd[1621]: time="2025-01-17T12:29:33.906516869Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:29:33.951917 containerd[1621]: time="2025-01-17T12:29:33.951873267Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\"" Jan 17 12:29:33.952562 containerd[1621]: time="2025-01-17T12:29:33.952528756Z" level=info msg="StartContainer for \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\"" Jan 17 12:29:34.051597 containerd[1621]: time="2025-01-17T12:29:34.051530316Z" level=info msg="StartContainer for \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\" returns successfully" Jan 17 12:29:34.218283 kubelet[3028]: E0117 12:29:34.216887 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:34.507678 kubelet[3028]: I0117 12:29:34.506982 3028 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:29:34.519865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9-rootfs.mount: Deactivated successfully. Jan 17 12:29:34.524486 containerd[1621]: time="2025-01-17T12:29:34.524335690Z" level=info msg="shim disconnected" id=7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9 namespace=k8s.io Jan 17 12:29:34.524486 containerd[1621]: time="2025-01-17T12:29:34.524427082Z" level=warning msg="cleaning up after shim disconnected" id=7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9 namespace=k8s.io Jan 17 12:29:34.524486 containerd[1621]: time="2025-01-17T12:29:34.524437944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:29:34.542149 kubelet[3028]: I0117 12:29:34.541975 3028 topology_manager.go:215] "Topology Admit Handler" podUID="119d39cc-29db-4e21-9625-748b3f95dac1" podNamespace="kube-system" podName="coredns-76f75df574-cgsrh" Jan 17 12:29:34.548670 kubelet[3028]: I0117 12:29:34.548639 3028 topology_manager.go:215] "Topology Admit Handler" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" podNamespace="calico-system" podName="calico-kube-controllers-5bbfc69fcf-t5lf6" Jan 17 12:29:34.549738 kubelet[3028]: I0117 12:29:34.548764 3028 topology_manager.go:215] "Topology Admit Handler" podUID="b65d841a-9846-425f-b276-4a75e065c2c8" podNamespace="calico-apiserver" podName="calico-apiserver-66d8498c4f-r6tfl" Jan 17 12:29:34.549738 kubelet[3028]: I0117 12:29:34.548867 3028 topology_manager.go:215] "Topology Admit Handler" podUID="c72f34c0-75e3-46f8-899b-6f07ace470c9" podNamespace="calico-apiserver" podName="calico-apiserver-66d8498c4f-4mtqh" Jan 17 12:29:34.563735 kubelet[3028]: I0117 12:29:34.563711 3028 topology_manager.go:215] "Topology Admit Handler" podUID="acf7efb0-d6d1-45db-b95d-bc14baf6f897" podNamespace="kube-system" podName="coredns-76f75df574-p8c8v" Jan 17 12:29:34.630298 kubelet[3028]: I0117 12:29:34.630268 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6486\" (UniqueName: \"kubernetes.io/projected/119d39cc-29db-4e21-9625-748b3f95dac1-kube-api-access-j6486\") pod \"coredns-76f75df574-cgsrh\" (UID: 
\"119d39cc-29db-4e21-9625-748b3f95dac1\") " pod="kube-system/coredns-76f75df574-cgsrh" Jan 17 12:29:34.630298 kubelet[3028]: I0117 12:29:34.630308 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skk9m\" (UniqueName: \"kubernetes.io/projected/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-kube-api-access-skk9m\") pod \"calico-kube-controllers-5bbfc69fcf-t5lf6\" (UID: \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\") " pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" Jan 17 12:29:34.630624 kubelet[3028]: I0117 12:29:34.630331 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acf7efb0-d6d1-45db-b95d-bc14baf6f897-config-volume\") pod \"coredns-76f75df574-p8c8v\" (UID: \"acf7efb0-d6d1-45db-b95d-bc14baf6f897\") " pod="kube-system/coredns-76f75df574-p8c8v" Jan 17 12:29:34.630624 kubelet[3028]: I0117 12:29:34.630351 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b65d841a-9846-425f-b276-4a75e065c2c8-calico-apiserver-certs\") pod \"calico-apiserver-66d8498c4f-r6tfl\" (UID: \"b65d841a-9846-425f-b276-4a75e065c2c8\") " pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" Jan 17 12:29:34.630624 kubelet[3028]: I0117 12:29:34.630373 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/119d39cc-29db-4e21-9625-748b3f95dac1-config-volume\") pod \"coredns-76f75df574-cgsrh\" (UID: \"119d39cc-29db-4e21-9625-748b3f95dac1\") " pod="kube-system/coredns-76f75df574-cgsrh" Jan 17 12:29:34.630624 kubelet[3028]: I0117 12:29:34.630392 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s4t\" (UniqueName: \"kubernetes.io/projected/b65d841a-9846-425f-b276-4a75e065c2c8-kube-api-access-s9s4t\") pod \"calico-apiserver-66d8498c4f-r6tfl\" (UID: \"b65d841a-9846-425f-b276-4a75e065c2c8\") " pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" Jan 17 12:29:34.630624 kubelet[3028]: I0117 12:29:34.630408 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwwrg\" (UniqueName: \"kubernetes.io/projected/acf7efb0-d6d1-45db-b95d-bc14baf6f897-kube-api-access-qwwrg\") pod \"coredns-76f75df574-p8c8v\" (UID: \"acf7efb0-d6d1-45db-b95d-bc14baf6f897\") " pod="kube-system/coredns-76f75df574-p8c8v" Jan 17 12:29:34.631442 kubelet[3028]: I0117 12:29:34.630509 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c72f34c0-75e3-46f8-899b-6f07ace470c9-calico-apiserver-certs\") pod \"calico-apiserver-66d8498c4f-4mtqh\" (UID: \"c72f34c0-75e3-46f8-899b-6f07ace470c9\") " pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" Jan 17 12:29:34.631442 kubelet[3028]: I0117 12:29:34.630546 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbhf\" (UniqueName: \"kubernetes.io/projected/c72f34c0-75e3-46f8-899b-6f07ace470c9-kube-api-access-bhbhf\") pod \"calico-apiserver-66d8498c4f-4mtqh\" (UID: \"c72f34c0-75e3-46f8-899b-6f07ace470c9\") " pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" Jan 17 12:29:34.631442 kubelet[3028]: I0117 12:29:34.630584 3028 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-tigera-ca-bundle\") pod \"calico-kube-controllers-5bbfc69fcf-t5lf6\" (UID: \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\") " pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" Jan 17 12:29:34.866800 containerd[1621]: time="2025-01-17T12:29:34.866749633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cgsrh,Uid:119d39cc-29db-4e21-9625-748b3f95dac1,Namespace:kube-system,Attempt:0,}" Jan 17 12:29:34.869450 containerd[1621]: time="2025-01-17T12:29:34.869378904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-4mtqh,Uid:c72f34c0-75e3-46f8-899b-6f07ace470c9,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:29:34.871369 containerd[1621]: time="2025-01-17T12:29:34.871160796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p8c8v,Uid:acf7efb0-d6d1-45db-b95d-bc14baf6f897,Namespace:kube-system,Attempt:0,}" Jan 17 12:29:34.874495 containerd[1621]: time="2025-01-17T12:29:34.874456747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-r6tfl,Uid:b65d841a-9846-425f-b276-4a75e065c2c8,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:29:34.876897 containerd[1621]: time="2025-01-17T12:29:34.876846209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbfc69fcf-t5lf6,Uid:5a63cf5e-1b3a-4788-8363-8dde4df3c8b8,Namespace:calico-system,Attempt:0,}" Jan 17 12:29:35.266086 containerd[1621]: time="2025-01-17T12:29:35.265959109Z" level=error msg="Failed to destroy network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.273529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635-shm.mount: Deactivated successfully. 
Jan 17 12:29:35.295849 containerd[1621]: time="2025-01-17T12:29:35.295803884Z" level=error msg="Failed to destroy network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.296152 containerd[1621]: time="2025-01-17T12:29:35.296101412Z" level=error msg="Failed to destroy network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.297093 containerd[1621]: time="2025-01-17T12:29:35.296764135Z" level=error msg="Failed to destroy network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.299223 containerd[1621]: time="2025-01-17T12:29:35.298353185Z" level=error msg="encountered an error cleaning up failed sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.302195 containerd[1621]: time="2025-01-17T12:29:35.299562383Z" level=error msg="encountered an error cleaning up failed sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.301891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672-shm.mount: Deactivated successfully. Jan 17 12:29:35.302066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522-shm.mount: Deactivated successfully. Jan 17 12:29:35.302246 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d-shm.mount: Deactivated successfully. 
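
Another hedged annotation (not part of the log): every RunPodSandbox and destroy-network failure in this stretch reduces to the same precondition, the Calico CNI plugin stat-ing /var/lib/calico/nodename, a file that calico-node writes once it is running with /var/lib/calico mounted; the error text itself says as much. The short Python sketch below is illustrative only (the path and wording come from the log; the helper name is an assumption) and shows the check that keeps failing here until calico-node is up.

    # Hedged sketch: the condition behind the repeated "stat /var/lib/calico/nodename" errors.
    import os

    NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted from the log's error messages

    def calico_node_ready(path: str = NODENAME_FILE) -> str:
        """Return the node name Calico recorded, or raise the same stat error seen in the log."""
        if not os.path.exists(path):
            raise FileNotFoundError(
                f"stat {path}: no such file or directory: "
                "check that the calico/node container is running and has mounted /var/lib/calico/"
            )
        with open(path, "r", encoding="utf-8") as fh:
            return fh.read().strip()

    if __name__ == "__main__":
        try:
            print("calico nodename:", calico_node_ready())
        except FileNotFoundError as err:
            print("CNI would fail here:", err)
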
Jan 17 12:29:35.306991 containerd[1621]: time="2025-01-17T12:29:35.306962462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbfc69fcf-t5lf6,Uid:5a63cf5e-1b3a-4788-8363-8dde4df3c8b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.308128 containerd[1621]: time="2025-01-17T12:29:35.307865436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-r6tfl,Uid:b65d841a-9846-425f-b276-4a75e065c2c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.311518 containerd[1621]: time="2025-01-17T12:29:35.311486759Z" level=error msg="encountered an error cleaning up failed sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.311649 containerd[1621]: time="2025-01-17T12:29:35.311626921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cgsrh,Uid:119d39cc-29db-4e21-9625-748b3f95dac1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.311751 containerd[1621]: time="2025-01-17T12:29:35.311731137Z" level=error msg="encountered an error cleaning up failed sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.311864 containerd[1621]: time="2025-01-17T12:29:35.311841203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p8c8v,Uid:acf7efb0-d6d1-45db-b95d-bc14baf6f897,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.312012 containerd[1621]: time="2025-01-17T12:29:35.311992206Z" level=error msg="Failed to destroy network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.313320 kubelet[3028]: E0117 12:29:35.313289 3028 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.313633 kubelet[3028]: E0117 12:29:35.313351 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-p8c8v" Jan 17 12:29:35.313633 kubelet[3028]: E0117 12:29:35.313371 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-p8c8v" Jan 17 12:29:35.313633 kubelet[3028]: E0117 12:29:35.313419 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-p8c8v_kube-system(acf7efb0-d6d1-45db-b95d-bc14baf6f897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-p8c8v_kube-system(acf7efb0-d6d1-45db-b95d-bc14baf6f897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p8c8v" podUID="acf7efb0-d6d1-45db-b95d-bc14baf6f897" Jan 17 12:29:35.315199 containerd[1621]: time="2025-01-17T12:29:35.313868846Z" level=error msg="encountered an error cleaning up failed sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.315250 kubelet[3028]: E0117 12:29:35.315120 3028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.315250 kubelet[3028]: E0117 12:29:35.315157 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" Jan 17 12:29:35.315250 kubelet[3028]: E0117 12:29:35.315197 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" Jan 17 12:29:35.315356 kubelet[3028]: E0117 12:29:35.315236 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bbfc69fcf-t5lf6_calico-system(5a63cf5e-1b3a-4788-8363-8dde4df3c8b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bbfc69fcf-t5lf6_calico-system(5a63cf5e-1b3a-4788-8363-8dde4df3c8b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" Jan 17 12:29:35.315356 kubelet[3028]: E0117 12:29:35.315273 3028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.315356 kubelet[3028]: E0117 12:29:35.315290 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cgsrh" Jan 17 12:29:35.315772 kubelet[3028]: E0117 12:29:35.315304 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cgsrh" Jan 17 12:29:35.315772 kubelet[3028]: E0117 12:29:35.315332 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cgsrh_kube-system(119d39cc-29db-4e21-9625-748b3f95dac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cgsrh_kube-system(119d39cc-29db-4e21-9625-748b3f95dac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cgsrh" podUID="119d39cc-29db-4e21-9625-748b3f95dac1" Jan 17 12:29:35.315772 kubelet[3028]: E0117 12:29:35.315353 3028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.315671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319-shm.mount: Deactivated successfully. Jan 17 12:29:35.315977 containerd[1621]: time="2025-01-17T12:29:35.315527878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-4mtqh,Uid:c72f34c0-75e3-46f8-899b-6f07ace470c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.316064 kubelet[3028]: E0117 12:29:35.315368 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" Jan 17 12:29:35.316064 kubelet[3028]: E0117 12:29:35.315381 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" Jan 17 12:29:35.316064 kubelet[3028]: E0117 12:29:35.315405 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d8498c4f-r6tfl_calico-apiserver(b65d841a-9846-425f-b276-4a75e065c2c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d8498c4f-r6tfl_calico-apiserver(b65d841a-9846-425f-b276-4a75e065c2c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" podUID="b65d841a-9846-425f-b276-4a75e065c2c8" Jan 17 12:29:35.316241 kubelet[3028]: E0117 12:29:35.316206 3028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.316282 kubelet[3028]: E0117 12:29:35.316236 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" Jan 17 12:29:35.316312 kubelet[3028]: E0117 12:29:35.316284 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" Jan 17 12:29:35.316334 kubelet[3028]: E0117 12:29:35.316316 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d8498c4f-4mtqh_calico-apiserver(c72f34c0-75e3-46f8-899b-6f07ace470c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d8498c4f-4mtqh_calico-apiserver(c72f34c0-75e3-46f8-899b-6f07ace470c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" podUID="c72f34c0-75e3-46f8-899b-6f07ace470c9" Jan 17 12:29:35.326069 kubelet[3028]: I0117 12:29:35.326018 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:35.328750 kubelet[3028]: I0117 12:29:35.328682 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:35.329859 kubelet[3028]: I0117 12:29:35.329761 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:35.336474 containerd[1621]: time="2025-01-17T12:29:35.336186063Z" level=info msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" Jan 17 12:29:35.336866 containerd[1621]: time="2025-01-17T12:29:35.336483791Z" level=info msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" Jan 17 12:29:35.337449 containerd[1621]: time="2025-01-17T12:29:35.337327013Z" level=info msg="Ensure that sandbox 6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522 in task-service has been cleanup successfully" Jan 17 12:29:35.338116 containerd[1621]: time="2025-01-17T12:29:35.337539962Z" level=info msg="Ensure that sandbox a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319 in task-service has been cleanup successfully" Jan 17 12:29:35.339239 containerd[1621]: time="2025-01-17T12:29:35.338715607Z" level=info msg="StopPodSandbox for 
\"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" Jan 17 12:29:35.339239 containerd[1621]: time="2025-01-17T12:29:35.338935029Z" level=info msg="Ensure that sandbox df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635 in task-service has been cleanup successfully" Jan 17 12:29:35.347546 containerd[1621]: time="2025-01-17T12:29:35.347497358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:29:35.348272 kubelet[3028]: I0117 12:29:35.348250 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:35.351491 containerd[1621]: time="2025-01-17T12:29:35.351408092Z" level=info msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" Jan 17 12:29:35.352066 containerd[1621]: time="2025-01-17T12:29:35.351673520Z" level=info msg="Ensure that sandbox 269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d in task-service has been cleanup successfully" Jan 17 12:29:35.362892 kubelet[3028]: I0117 12:29:35.362868 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:35.365736 containerd[1621]: time="2025-01-17T12:29:35.365588609Z" level=info msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" Jan 17 12:29:35.365736 containerd[1621]: time="2025-01-17T12:29:35.365737919Z" level=info msg="Ensure that sandbox 22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672 in task-service has been cleanup successfully" Jan 17 12:29:35.422448 containerd[1621]: time="2025-01-17T12:29:35.422310454Z" level=error msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" failed" error="failed to destroy network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.422827 kubelet[3028]: E0117 12:29:35.422520 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:35.422827 kubelet[3028]: E0117 12:29:35.422601 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d"} Jan 17 12:29:35.422827 kubelet[3028]: E0117 12:29:35.422748 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"119d39cc-29db-4e21-9625-748b3f95dac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:35.422827 kubelet[3028]: E0117 
12:29:35.422783 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"119d39cc-29db-4e21-9625-748b3f95dac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cgsrh" podUID="119d39cc-29db-4e21-9625-748b3f95dac1" Jan 17 12:29:35.427468 containerd[1621]: time="2025-01-17T12:29:35.427436298Z" level=error msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" failed" error="failed to destroy network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.429097 kubelet[3028]: E0117 12:29:35.429070 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:35.429163 kubelet[3028]: E0117 12:29:35.429112 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672"} Jan 17 12:29:35.429163 kubelet[3028]: E0117 12:29:35.429144 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:35.429283 kubelet[3028]: E0117 12:29:35.429171 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" Jan 17 12:29:35.436136 containerd[1621]: time="2025-01-17T12:29:35.436049853Z" level=error msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" failed" error="failed to destroy network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 12:29:35.436310 kubelet[3028]: E0117 12:29:35.436268 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:35.436393 kubelet[3028]: E0117 12:29:35.436324 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522"} Jan 17 12:29:35.436393 kubelet[3028]: E0117 12:29:35.436356 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b65d841a-9846-425f-b276-4a75e065c2c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:35.436393 kubelet[3028]: E0117 12:29:35.436392 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b65d841a-9846-425f-b276-4a75e065c2c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" podUID="b65d841a-9846-425f-b276-4a75e065c2c8" Jan 17 12:29:35.438469 containerd[1621]: time="2025-01-17T12:29:35.438418575Z" level=error msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" failed" error="failed to destroy network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.439011 kubelet[3028]: E0117 12:29:35.438987 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:35.439057 kubelet[3028]: E0117 12:29:35.439024 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319"} Jan 17 12:29:35.439057 kubelet[3028]: E0117 12:29:35.439052 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c72f34c0-75e3-46f8-899b-6f07ace470c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:35.439128 kubelet[3028]: E0117 12:29:35.439079 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c72f34c0-75e3-46f8-899b-6f07ace470c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" podUID="c72f34c0-75e3-46f8-899b-6f07ace470c9" Jan 17 12:29:35.441558 containerd[1621]: time="2025-01-17T12:29:35.441401260Z" level=error msg="StopPodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" failed" error="failed to destroy network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:35.444728 kubelet[3028]: E0117 12:29:35.444502 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:35.444728 kubelet[3028]: E0117 12:29:35.444684 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635"} Jan 17 12:29:35.445188 kubelet[3028]: E0117 12:29:35.445050 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acf7efb0-d6d1-45db-b95d-bc14baf6f897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:35.445450 kubelet[3028]: E0117 12:29:35.445346 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acf7efb0-d6d1-45db-b95d-bc14baf6f897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p8c8v" podUID="acf7efb0-d6d1-45db-b95d-bc14baf6f897" Jan 17 12:29:36.219754 containerd[1621]: time="2025-01-17T12:29:36.219702788Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ffrv6,Uid:8ce471ce-b9be-46d8-bb3c-6f71b777b182,Namespace:calico-system,Attempt:0,}" Jan 17 12:29:36.292863 containerd[1621]: time="2025-01-17T12:29:36.292814443Z" level=error msg="Failed to destroy network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:36.293671 containerd[1621]: time="2025-01-17T12:29:36.293142559Z" level=error msg="encountered an error cleaning up failed sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:36.293671 containerd[1621]: time="2025-01-17T12:29:36.293204855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffrv6,Uid:8ce471ce-b9be-46d8-bb3c-6f71b777b182,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:36.295941 kubelet[3028]: E0117 12:29:36.295346 3028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:36.295941 kubelet[3028]: E0117 12:29:36.295396 3028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:36.295941 kubelet[3028]: E0117 12:29:36.295421 3028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffrv6" Jan 17 12:29:36.295663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8-shm.mount: Deactivated successfully. 
Jan 17 12:29:36.297395 kubelet[3028]: E0117 12:29:36.295469 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ffrv6_calico-system(8ce471ce-b9be-46d8-bb3c-6f71b777b182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ffrv6_calico-system(8ce471ce-b9be-46d8-bb3c-6f71b777b182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:36.365714 kubelet[3028]: I0117 12:29:36.365676 3028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:36.366692 containerd[1621]: time="2025-01-17T12:29:36.366661560Z" level=info msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" Jan 17 12:29:36.367279 containerd[1621]: time="2025-01-17T12:29:36.367094592Z" level=info msg="Ensure that sandbox f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8 in task-service has been cleanup successfully" Jan 17 12:29:36.397142 containerd[1621]: time="2025-01-17T12:29:36.397086012Z" level=error msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" failed" error="failed to destroy network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:29:36.397413 kubelet[3028]: E0117 12:29:36.397390 3028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:36.397470 kubelet[3028]: E0117 12:29:36.397441 3028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8"} Jan 17 12:29:36.397511 kubelet[3028]: E0117 12:29:36.397482 3028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:29:36.397571 kubelet[3028]: E0117 12:29:36.397514 3028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ce471ce-b9be-46d8-bb3c-6f71b777b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffrv6" podUID="8ce471ce-b9be-46d8-bb3c-6f71b777b182" Jan 17 12:29:39.485627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612518671.mount: Deactivated successfully. Jan 17 12:29:39.616008 containerd[1621]: time="2025-01-17T12:29:39.615157041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:29:39.628859 containerd[1621]: time="2025-01-17T12:29:39.628678622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:39.629955 containerd[1621]: time="2025-01-17T12:29:39.629475096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.269665026s" Jan 17 12:29:39.629955 containerd[1621]: time="2025-01-17T12:29:39.629512666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:29:39.659324 containerd[1621]: time="2025-01-17T12:29:39.659215628Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:39.659804 containerd[1621]: time="2025-01-17T12:29:39.659759708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:39.709153 containerd[1621]: time="2025-01-17T12:29:39.709091254Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:29:39.785380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469080722.mount: Deactivated successfully. Jan 17 12:29:39.821498 containerd[1621]: time="2025-01-17T12:29:39.821424036Z" level=info msg="CreateContainer within sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\"" Jan 17 12:29:39.831918 containerd[1621]: time="2025-01-17T12:29:39.831838400Z" level=info msg="StartContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\"" Jan 17 12:29:39.946869 containerd[1621]: time="2025-01-17T12:29:39.946752052Z" level=info msg="StartContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" returns successfully" Jan 17 12:29:40.078063 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:29:40.080499 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:29:40.239228 systemd-journald[1169]: Under memory pressure, flushing caches. 
Jan 17 12:29:40.238286 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:29:40.238371 systemd-resolved[1506]: Flushed all caches. Jan 17 12:29:40.430913 kubelet[3028]: I0117 12:29:40.430863 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-474vg" podStartSLOduration=1.9699578610000001 podStartE2EDuration="15.409934645s" podCreationTimestamp="2025-01-17 12:29:25 +0000 UTC" firstStartedPulling="2025-01-17 12:29:26.189769871 +0000 UTC m=+23.101801322" lastFinishedPulling="2025-01-17 12:29:39.629746655 +0000 UTC m=+36.541778106" observedRunningTime="2025-01-17 12:29:40.408335967 +0000 UTC m=+37.320367428" watchObservedRunningTime="2025-01-17 12:29:40.409934645 +0000 UTC m=+37.321966095" Jan 17 12:29:41.395689 kubelet[3028]: I0117 12:29:41.395637 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:29:42.286585 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:29:42.286593 systemd-resolved[1506]: Flushed all caches. Jan 17 12:29:42.288212 systemd-journald[1169]: Under memory pressure, flushing caches. Jan 17 12:29:47.219282 containerd[1621]: time="2025-01-17T12:29:47.219213994Z" level=info msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" Jan 17 12:29:47.220521 containerd[1621]: time="2025-01-17T12:29:47.219440119Z" level=info msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" Jan 17 12:29:47.221921 containerd[1621]: time="2025-01-17T12:29:47.220938029Z" level=info msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.305 [INFO][4411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.305 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" iface="eth0" netns="/var/run/netns/cni-df6eafe3-10b5-d3f4-265d-8105759e7899" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" iface="eth0" netns="/var/run/netns/cni-df6eafe3-10b5-d3f4-265d-8105759e7899" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" iface="eth0" netns="/var/run/netns/cni-df6eafe3-10b5-d3f4-265d-8105759e7899" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.455 [INFO][4433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.456 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.456 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.468 [WARNING][4433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.468 [INFO][4433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.470 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:47.475679 containerd[1621]: 2025-01-17 12:29:47.472 [INFO][4411] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:29:47.479774 systemd[1]: run-netns-cni\x2ddf6eafe3\x2d10b5\x2dd3f4\x2d265d\x2d8105759e7899.mount: Deactivated successfully. Jan 17 12:29:47.491920 containerd[1621]: time="2025-01-17T12:29:47.491856087Z" level=info msg="TearDown network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" successfully" Jan 17 12:29:47.491920 containerd[1621]: time="2025-01-17T12:29:47.491895511Z" level=info msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" returns successfully" Jan 17 12:29:47.494771 containerd[1621]: time="2025-01-17T12:29:47.494397013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cgsrh,Uid:119d39cc-29db-4e21-9625-748b3f95dac1,Namespace:kube-system,Attempt:1,}" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.308 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" iface="eth0" netns="/var/run/netns/cni-1aa3f10b-97ca-ca8a-8f3d-83c1666fc827" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.309 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" iface="eth0" netns="/var/run/netns/cni-1aa3f10b-97ca-ca8a-8f3d-83c1666fc827" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.309 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" iface="eth0" netns="/var/run/netns/cni-1aa3f10b-97ca-ca8a-8f3d-83c1666fc827" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.309 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.309 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.455 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.456 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.470 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.474 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.474 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.477 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:47.495308 containerd[1621]: 2025-01-17 12:29:47.491 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:29:47.495783 containerd[1621]: time="2025-01-17T12:29:47.495558372Z" level=info msg="TearDown network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" successfully" Jan 17 12:29:47.497517 containerd[1621]: time="2025-01-17T12:29:47.495574492Z" level=info msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" returns successfully" Jan 17 12:29:47.499430 containerd[1621]: time="2025-01-17T12:29:47.498898797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffrv6,Uid:8ce471ce-b9be-46d8-bb3c-6f71b777b182,Namespace:calico-system,Attempt:1,}" Jan 17 12:29:47.500371 systemd[1]: run-netns-cni\x2d1aa3f10b\x2d97ca\x2dca8a\x2d8f3d\x2d83c1666fc827.mount: Deactivated successfully. Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.304 [INFO][4416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.305 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" iface="eth0" netns="/var/run/netns/cni-333bbdf0-8f55-53f1-0f3b-810f7409a9ca" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.305 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" iface="eth0" netns="/var/run/netns/cni-333bbdf0-8f55-53f1-0f3b-810f7409a9ca" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.307 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" iface="eth0" netns="/var/run/netns/cni-333bbdf0-8f55-53f1-0f3b-810f7409a9ca" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.307 [INFO][4416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.307 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.456 [INFO][4432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.456 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.477 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.487 [WARNING][4432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.487 [INFO][4432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.490 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:47.501879 containerd[1621]: 2025-01-17 12:29:47.496 [INFO][4416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:29:47.502637 containerd[1621]: time="2025-01-17T12:29:47.502288846Z" level=info msg="TearDown network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" successfully" Jan 17 12:29:47.502637 containerd[1621]: time="2025-01-17T12:29:47.502340994Z" level=info msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" returns successfully" Jan 17 12:29:47.504484 containerd[1621]: time="2025-01-17T12:29:47.504455108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-4mtqh,Uid:c72f34c0-75e3-46f8-899b-6f07ace470c9,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:29:47.507006 systemd[1]: run-netns-cni\x2d333bbdf0\x2d8f55\x2d53f1\x2d0f3b\x2d810f7409a9ca.mount: Deactivated successfully. 
Jan 17 12:29:47.696573 systemd-networkd[1250]: calic8c14f76672: Link UP Jan 17 12:29:47.698116 systemd-networkd[1250]: calic8c14f76672: Gained carrier Jan 17 12:29:47.722158 systemd-networkd[1250]: cali44719db88d9: Link UP Jan 17 12:29:47.722700 systemd-networkd[1250]: cali44719db88d9: Gained carrier Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.580 [INFO][4461] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.593 [INFO][4461] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0 csi-node-driver- calico-system 8ce471ce-b9be-46d8-bb3c-6f71b777b182 766 0 2025-01-17 12:29:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 csi-node-driver-ffrv6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8c14f76672 [] []}} ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.593 [INFO][4461] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.633 [INFO][4488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" HandleID="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.642 [INFO][4488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" HandleID="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003196f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"csi-node-driver-ffrv6", "timestamp":"2025-01-17 12:29:47.633447018 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.642 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.643 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.643 [INFO][4488] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.645 [INFO][4488] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.654 [INFO][4488] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.659 [INFO][4488] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.661 [INFO][4488] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.663 [INFO][4488] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.663 [INFO][4488] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.664 [INFO][4488] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171 Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.670 [INFO][4488] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4488] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.193/26] block=192.168.73.192/26 handle="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4488] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.193/26] handle="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.678 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:29:47.723579 containerd[1621]: 2025-01-17 12:29:47.678 [INFO][4488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.193/26] IPv6=[] ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" HandleID="k8s-pod-network.583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.680 [INFO][4461] cni-plugin/k8s.go 386: Populated endpoint ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce471ce-b9be-46d8-bb3c-6f71b777b182", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"csi-node-driver-ffrv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8c14f76672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.681 [INFO][4461] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.193/32] ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.681 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8c14f76672 ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.698 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.699 [INFO][4461] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce471ce-b9be-46d8-bb3c-6f71b777b182", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171", Pod:"csi-node-driver-ffrv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8c14f76672", MAC:"8e:e9:c2:77:39:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.724659 containerd[1621]: 2025-01-17 12:29:47.716 [INFO][4461] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171" Namespace="calico-system" Pod="csi-node-driver-ffrv6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.578 [INFO][4453] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.593 [INFO][4453] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0 coredns-76f75df574- kube-system 119d39cc-29db-4e21-9625-748b3f95dac1 767 0 2025-01-17 12:29:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 coredns-76f75df574-cgsrh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44719db88d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.593 [INFO][4453] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" 
WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.649 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" HandleID="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.661 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" HandleID="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"coredns-76f75df574-cgsrh", "timestamp":"2025-01-17 12:29:47.649888587 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.661 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4489] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.679 [INFO][4489] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.683 [INFO][4489] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.688 [INFO][4489] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.690 [INFO][4489] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.693 [INFO][4489] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.693 [INFO][4489] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.697 [INFO][4489] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6 Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.705 [INFO][4489] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 
2025-01-17 12:29:47.711 [INFO][4489] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.194/26] block=192.168.73.192/26 handle="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.712 [INFO][4489] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.194/26] handle="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.712 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:47.753768 containerd[1621]: 2025-01-17 12:29:47.713 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.194/26] IPv6=[] ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" HandleID="k8s-pod-network.27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.718 [INFO][4453] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"119d39cc-29db-4e21-9625-748b3f95dac1", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"coredns-76f75df574-cgsrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44719db88d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.718 [INFO][4453] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.194/32] ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 
12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.718 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44719db88d9 ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.722 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.723 [INFO][4453] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"119d39cc-29db-4e21-9625-748b3f95dac1", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6", Pod:"coredns-76f75df574-cgsrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44719db88d9", MAC:"6a:06:d1:51:af:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.755252 containerd[1621]: 2025-01-17 12:29:47.744 [INFO][4453] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6" Namespace="kube-system" Pod="coredns-76f75df574-cgsrh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:29:47.796111 containerd[1621]: time="2025-01-17T12:29:47.781793580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:47.796111 containerd[1621]: time="2025-01-17T12:29:47.781846449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:47.796111 containerd[1621]: time="2025-01-17T12:29:47.781878659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.796111 containerd[1621]: time="2025-01-17T12:29:47.782107809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.800514 systemd-networkd[1250]: cali9ff768cbefb: Link UP Jan 17 12:29:47.805050 systemd-networkd[1250]: cali9ff768cbefb: Gained carrier Jan 17 12:29:47.838817 containerd[1621]: time="2025-01-17T12:29:47.838558414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:47.840383 containerd[1621]: time="2025-01-17T12:29:47.839161295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:47.840383 containerd[1621]: time="2025-01-17T12:29:47.839881264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.843366 containerd[1621]: time="2025-01-17T12:29:47.842512009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.603 [INFO][4475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.617 [INFO][4475] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0 calico-apiserver-66d8498c4f- calico-apiserver c72f34c0-75e3-46f8-899b-6f07ace470c9 765 0 2025-01-17 12:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d8498c4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 calico-apiserver-66d8498c4f-4mtqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ff768cbefb [] []}} ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.617 [INFO][4475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.666 [INFO][4498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" 
HandleID="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" HandleID="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"calico-apiserver-66d8498c4f-4mtqh", "timestamp":"2025-01-17 12:29:47.666625521 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.677 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.713 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.713 [INFO][4498] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.728 [INFO][4498] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.748 [INFO][4498] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.759 [INFO][4498] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.761 [INFO][4498] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.763 [INFO][4498] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.763 [INFO][4498] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.765 [INFO][4498] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.771 [INFO][4498] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.782 [INFO][4498] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.195/26] block=192.168.73.192/26 handle="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 
containerd[1621]: 2025-01-17 12:29:47.783 [INFO][4498] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.195/26] handle="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.783 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:47.851050 containerd[1621]: 2025-01-17 12:29:47.785 [INFO][4498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.195/26] IPv6=[] ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" HandleID="k8s-pod-network.7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.790 [INFO][4475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f34c0-75e3-46f8-899b-6f07ace470c9", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"calico-apiserver-66d8498c4f-4mtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff768cbefb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.791 [INFO][4475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.195/32] ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.791 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ff768cbefb ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.804 [INFO][4475] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.804 [INFO][4475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f34c0-75e3-46f8-899b-6f07ace470c9", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c", Pod:"calico-apiserver-66d8498c4f-4mtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff768cbefb", MAC:"42:03:66:e4:ab:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:47.852951 containerd[1621]: 2025-01-17 12:29:47.828 [INFO][4475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-4mtqh" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:29:47.897797 containerd[1621]: time="2025-01-17T12:29:47.897530075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:47.897797 containerd[1621]: time="2025-01-17T12:29:47.897589778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:47.897797 containerd[1621]: time="2025-01-17T12:29:47.897609625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.897797 containerd[1621]: time="2025-01-17T12:29:47.897685798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:47.915025 containerd[1621]: time="2025-01-17T12:29:47.914988884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cgsrh,Uid:119d39cc-29db-4e21-9625-748b3f95dac1,Namespace:kube-system,Attempt:1,} returns sandbox id \"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6\"" Jan 17 12:29:47.915758 containerd[1621]: time="2025-01-17T12:29:47.915660904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffrv6,Uid:8ce471ce-b9be-46d8-bb3c-6f71b777b182,Namespace:calico-system,Attempt:1,} returns sandbox id \"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171\"" Jan 17 12:29:47.919151 containerd[1621]: time="2025-01-17T12:29:47.919120703Z" level=info msg="CreateContainer within sandbox \"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:29:47.919482 containerd[1621]: time="2025-01-17T12:29:47.919334213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:29:47.937814 containerd[1621]: time="2025-01-17T12:29:47.937776577Z" level=info msg="CreateContainer within sandbox \"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a108755b6e08e3326fb62e3f737bd5932e73c37856744177b54abbcfc6f8435c\"" Jan 17 12:29:47.938491 containerd[1621]: time="2025-01-17T12:29:47.938472391Z" level=info msg="StartContainer for \"a108755b6e08e3326fb62e3f737bd5932e73c37856744177b54abbcfc6f8435c\"" Jan 17 12:29:48.025481 containerd[1621]: time="2025-01-17T12:29:48.025355658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-4mtqh,Uid:c72f34c0-75e3-46f8-899b-6f07ace470c9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c\"" Jan 17 12:29:48.043844 containerd[1621]: time="2025-01-17T12:29:48.043418458Z" level=info msg="StartContainer for \"a108755b6e08e3326fb62e3f737bd5932e73c37856744177b54abbcfc6f8435c\" returns successfully" Jan 17 12:29:48.217247 containerd[1621]: time="2025-01-17T12:29:48.216793057Z" level=info msg="StopPodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" Jan 17 12:29:48.217509 containerd[1621]: time="2025-01-17T12:29:48.217482449Z" level=info msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.283 [INFO][4752] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.284 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" iface="eth0" netns="/var/run/netns/cni-be444ed1-58ba-2baa-fa40-a3e97f860058" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" iface="eth0" netns="/var/run/netns/cni-be444ed1-58ba-2baa-fa40-a3e97f860058" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" iface="eth0" netns="/var/run/netns/cni-be444ed1-58ba-2baa-fa40-a3e97f860058" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4752] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.305 [INFO][4766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.306 [INFO][4766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.306 [INFO][4766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.312 [WARNING][4766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.312 [INFO][4766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.313 [INFO][4766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:48.316891 containerd[1621]: 2025-01-17 12:29:48.315 [INFO][4752] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:29:48.318138 containerd[1621]: time="2025-01-17T12:29:48.316968205Z" level=info msg="TearDown network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" successfully" Jan 17 12:29:48.318138 containerd[1621]: time="2025-01-17T12:29:48.316991429Z" level=info msg="StopPodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" returns successfully" Jan 17 12:29:48.318138 containerd[1621]: time="2025-01-17T12:29:48.317688767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p8c8v,Uid:acf7efb0-d6d1-45db-b95d-bc14baf6f897,Namespace:kube-system,Attempt:1,}" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.282 [INFO][4751] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.283 [INFO][4751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" iface="eth0" netns="/var/run/netns/cni-cc9b2099-1ac5-66f8-2c8b-d5033079bb07" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" iface="eth0" netns="/var/run/netns/cni-cc9b2099-1ac5-66f8-2c8b-d5033079bb07" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" iface="eth0" netns="/var/run/netns/cni-cc9b2099-1ac5-66f8-2c8b-d5033079bb07" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4751] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.285 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.309 [INFO][4767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.309 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.313 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.318 [WARNING][4767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.318 [INFO][4767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.321 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:48.325797 containerd[1621]: 2025-01-17 12:29:48.323 [INFO][4751] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:29:48.327300 containerd[1621]: time="2025-01-17T12:29:48.326267278Z" level=info msg="TearDown network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" successfully" Jan 17 12:29:48.327300 containerd[1621]: time="2025-01-17T12:29:48.326285131Z" level=info msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" returns successfully" Jan 17 12:29:48.327300 containerd[1621]: time="2025-01-17T12:29:48.327062659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbfc69fcf-t5lf6,Uid:5a63cf5e-1b3a-4788-8363-8dde4df3c8b8,Namespace:calico-system,Attempt:1,}" Jan 17 12:29:48.438432 kubelet[3028]: I0117 12:29:48.438072 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cgsrh" podStartSLOduration=30.438036714 podStartE2EDuration="30.438036714s" podCreationTimestamp="2025-01-17 12:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:48.434218392 +0000 UTC m=+45.346249843" watchObservedRunningTime="2025-01-17 12:29:48.438036714 +0000 UTC m=+45.350068164" Jan 17 12:29:48.474129 systemd-networkd[1250]: calid845aed4293: Link UP Jan 17 12:29:48.475063 systemd-networkd[1250]: calid845aed4293: Gained carrier Jan 17 12:29:48.495722 systemd[1]: run-netns-cni\x2dcc9b2099\x2d1ac5\x2d66f8\x2d2c8b\x2dd5033079bb07.mount: Deactivated successfully. Jan 17 12:29:48.499018 systemd[1]: run-netns-cni\x2dbe444ed1\x2d58ba\x2d2baa\x2dfa40\x2da3e97f860058.mount: Deactivated successfully. Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.353 [INFO][4779] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.367 [INFO][4779] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0 coredns-76f75df574- kube-system acf7efb0-d6d1-45db-b95d-bc14baf6f897 786 0 2025-01-17 12:29:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 coredns-76f75df574-p8c8v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid845aed4293 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.367 [INFO][4779] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.401 [INFO][4802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" HandleID="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" 
Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.418 [INFO][4802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" HandleID="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c6d60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"coredns-76f75df574-p8c8v", "timestamp":"2025-01-17 12:29:48.401347717 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.418 [INFO][4802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.418 [INFO][4802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.418 [INFO][4802] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.422 [INFO][4802] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.427 [INFO][4802] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.431 [INFO][4802] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.433 [INFO][4802] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.436 [INFO][4802] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.436 [INFO][4802] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.439 [INFO][4802] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073 Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.449 [INFO][4802] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.461 [INFO][4802] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.196/26] block=192.168.73.192/26 handle="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.461 [INFO][4802] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.196/26] 
handle="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.462 [INFO][4802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:48.512156 containerd[1621]: 2025-01-17 12:29:48.462 [INFO][4802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.196/26] IPv6=[] ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" HandleID="k8s-pod-network.f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512790 containerd[1621]: 2025-01-17 12:29:48.467 [INFO][4779] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acf7efb0-d6d1-45db-b95d-bc14baf6f897", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"coredns-76f75df574-p8c8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid845aed4293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:48.512790 containerd[1621]: 2025-01-17 12:29:48.467 [INFO][4779] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.196/32] ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512790 containerd[1621]: 2025-01-17 12:29:48.467 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid845aed4293 ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512790 
containerd[1621]: 2025-01-17 12:29:48.478 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.512790 containerd[1621]: 2025-01-17 12:29:48.478 [INFO][4779] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acf7efb0-d6d1-45db-b95d-bc14baf6f897", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073", Pod:"coredns-76f75df574-p8c8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid845aed4293", MAC:"0e:a3:af:13:20:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:48.512790 containerd[1621]: 2025-01-17 12:29:48.508 [INFO][4779] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073" Namespace="kube-system" Pod="coredns-76f75df574-p8c8v" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:29:48.536593 systemd-networkd[1250]: cali14cbdf9f0da: Link UP Jan 17 12:29:48.537789 systemd-networkd[1250]: cali14cbdf9f0da: Gained carrier Jan 17 12:29:48.554940 containerd[1621]: time="2025-01-17T12:29:48.554687556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:48.554940 containerd[1621]: time="2025-01-17T12:29:48.554742499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:48.554940 containerd[1621]: time="2025-01-17T12:29:48.554755824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:48.554940 containerd[1621]: time="2025-01-17T12:29:48.554836836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.368 [INFO][4789] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.380 [INFO][4789] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0 calico-kube-controllers-5bbfc69fcf- calico-system 5a63cf5e-1b3a-4788-8363-8dde4df3c8b8 785 0 2025-01-17 12:29:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bbfc69fcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 calico-kube-controllers-5bbfc69fcf-t5lf6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali14cbdf9f0da [] []}} ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.380 [INFO][4789] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.416 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.426 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"calico-kube-controllers-5bbfc69fcf-t5lf6", "timestamp":"2025-01-17 12:29:48.416160139 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.426 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.461 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.461 [INFO][4807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.464 [INFO][4807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.469 [INFO][4807] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.490 [INFO][4807] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.505 [INFO][4807] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.509 [INFO][4807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.509 [INFO][4807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.511 [INFO][4807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4 Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.517 [INFO][4807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.522 [INFO][4807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.197/26] block=192.168.73.192/26 handle="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.522 [INFO][4807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.197/26] handle="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.522 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:29:48.555092 containerd[1621]: 2025-01-17 12:29:48.522 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.197/26] IPv6=[] ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.530 [INFO][4789] cni-plugin/k8s.go 386: Populated endpoint ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0", GenerateName:"calico-kube-controllers-5bbfc69fcf-", Namespace:"calico-system", SelfLink:"", UID:"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbfc69fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"calico-kube-controllers-5bbfc69fcf-t5lf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14cbdf9f0da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.530 [INFO][4789] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.197/32] ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.530 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14cbdf9f0da ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.538 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 
12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.539 [INFO][4789] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0", GenerateName:"calico-kube-controllers-5bbfc69fcf-", Namespace:"calico-system", SelfLink:"", UID:"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbfc69fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4", Pod:"calico-kube-controllers-5bbfc69fcf-t5lf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14cbdf9f0da", MAC:"8e:00:d9:11:ce:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:48.555537 containerd[1621]: 2025-01-17 12:29:48.551 [INFO][4789] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Namespace="calico-system" Pod="calico-kube-controllers-5bbfc69fcf-t5lf6" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:29:48.590074 containerd[1621]: time="2025-01-17T12:29:48.589999231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:48.590074 containerd[1621]: time="2025-01-17T12:29:48.590039997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:48.590562 containerd[1621]: time="2025-01-17T12:29:48.590062138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:48.590562 containerd[1621]: time="2025-01-17T12:29:48.590327546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:48.628666 containerd[1621]: time="2025-01-17T12:29:48.628571621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p8c8v,Uid:acf7efb0-d6d1-45db-b95d-bc14baf6f897,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073\"" Jan 17 12:29:48.633579 containerd[1621]: time="2025-01-17T12:29:48.633454299Z" level=info msg="CreateContainer within sandbox \"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:29:48.643481 containerd[1621]: time="2025-01-17T12:29:48.643450850Z" level=info msg="CreateContainer within sandbox \"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21071e88b49fc276c59f07e6221efafccd52c8a425850dbae241ed05d9144fe1\"" Jan 17 12:29:48.643914 containerd[1621]: time="2025-01-17T12:29:48.643891906Z" level=info msg="StartContainer for \"21071e88b49fc276c59f07e6221efafccd52c8a425850dbae241ed05d9144fe1\"" Jan 17 12:29:48.671408 containerd[1621]: time="2025-01-17T12:29:48.671329531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbfc69fcf-t5lf6,Uid:5a63cf5e-1b3a-4788-8363-8dde4df3c8b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\"" Jan 17 12:29:48.703254 containerd[1621]: time="2025-01-17T12:29:48.703164464Z" level=info msg="StartContainer for \"21071e88b49fc276c59f07e6221efafccd52c8a425850dbae241ed05d9144fe1\" returns successfully" Jan 17 12:29:48.878322 systemd-networkd[1250]: cali44719db88d9: Gained IPv6LL Jan 17 12:29:49.198304 systemd-networkd[1250]: calic8c14f76672: Gained IPv6LL Jan 17 12:29:49.326992 systemd-networkd[1250]: cali9ff768cbefb: Gained IPv6LL Jan 17 12:29:49.480894 kubelet[3028]: I0117 12:29:49.472762 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p8c8v" podStartSLOduration=31.472724913 podStartE2EDuration="31.472724913s" podCreationTimestamp="2025-01-17 12:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:49.454525267 +0000 UTC m=+46.366556717" watchObservedRunningTime="2025-01-17 12:29:49.472724913 +0000 UTC m=+46.384756364" Jan 17 12:29:49.567759 containerd[1621]: time="2025-01-17T12:29:49.567693236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:49.568606 containerd[1621]: time="2025-01-17T12:29:49.568502403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:29:49.569377 containerd[1621]: time="2025-01-17T12:29:49.569328182Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:49.571015 containerd[1621]: time="2025-01-17T12:29:49.570978829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:49.571830 containerd[1621]: time="2025-01-17T12:29:49.571475260Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.652019037s" Jan 17 12:29:49.571830 containerd[1621]: time="2025-01-17T12:29:49.571502321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:29:49.572386 containerd[1621]: time="2025-01-17T12:29:49.572363977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:29:49.573612 containerd[1621]: time="2025-01-17T12:29:49.573582593Z" level=info msg="CreateContainer within sandbox \"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:29:49.589550 containerd[1621]: time="2025-01-17T12:29:49.589500982Z" level=info msg="CreateContainer within sandbox \"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"deb6ac8175104f85a05e539b9421b325c80844d9eecaaba2b712835e869dcdd1\"" Jan 17 12:29:49.590162 containerd[1621]: time="2025-01-17T12:29:49.590082953Z" level=info msg="StartContainer for \"deb6ac8175104f85a05e539b9421b325c80844d9eecaaba2b712835e869dcdd1\"" Jan 17 12:29:49.671989 containerd[1621]: time="2025-01-17T12:29:49.671936426Z" level=info msg="StartContainer for \"deb6ac8175104f85a05e539b9421b325c80844d9eecaaba2b712835e869dcdd1\" returns successfully" Jan 17 12:29:50.217824 containerd[1621]: time="2025-01-17T12:29:50.217332822Z" level=info msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" Jan 17 12:29:50.287420 systemd-networkd[1250]: cali14cbdf9f0da: Gained IPv6LL Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.278 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.278 [INFO][5053] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" iface="eth0" netns="/var/run/netns/cni-e16649fd-faa2-9fc4-e3bb-5994307eb9ba" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.278 [INFO][5053] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" iface="eth0" netns="/var/run/netns/cni-e16649fd-faa2-9fc4-e3bb-5994307eb9ba" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.278 [INFO][5053] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" iface="eth0" netns="/var/run/netns/cni-e16649fd-faa2-9fc4-e3bb-5994307eb9ba" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.279 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.279 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.308 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.308 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.308 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.313 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.313 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.314 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:29:50.319822 containerd[1621]: 2025-01-17 12:29:50.316 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:29:50.320334 containerd[1621]: time="2025-01-17T12:29:50.320098462Z" level=info msg="TearDown network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" successfully" Jan 17 12:29:50.320334 containerd[1621]: time="2025-01-17T12:29:50.320123449Z" level=info msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" returns successfully" Jan 17 12:29:50.321106 containerd[1621]: time="2025-01-17T12:29:50.321045198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-r6tfl,Uid:b65d841a-9846-425f-b276-4a75e065c2c8,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:29:50.414746 systemd-networkd[1250]: cali563063ca3e9: Link UP Jan 17 12:29:50.415653 systemd-networkd[1250]: cali563063ca3e9: Gained carrier Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.347 [INFO][5071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.357 [INFO][5071] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0 calico-apiserver-66d8498c4f- calico-apiserver b65d841a-9846-425f-b276-4a75e065c2c8 818 0 2025-01-17 12:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d8498c4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 calico-apiserver-66d8498c4f-r6tfl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali563063ca3e9 [] []}} ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.357 [INFO][5071] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.378 [INFO][5082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" HandleID="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.385 [INFO][5082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" HandleID="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"calico-apiserver-66d8498c4f-r6tfl", "timestamp":"2025-01-17 12:29:50.378428123 +0000 UTC"}, 
Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.385 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.385 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.385 [INFO][5082] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.386 [INFO][5082] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.389 [INFO][5082] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.392 [INFO][5082] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.394 [INFO][5082] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.396 [INFO][5082] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.396 [INFO][5082] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.397 [INFO][5082] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.403 [INFO][5082] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.408 [INFO][5082] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.198/26] block=192.168.73.192/26 handle="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.408 [INFO][5082] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.198/26] handle="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.409 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:29:50.436220 containerd[1621]: 2025-01-17 12:29:50.409 [INFO][5082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.198/26] IPv6=[] ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" HandleID="k8s-pod-network.6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.412 [INFO][5071] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b65d841a-9846-425f-b276-4a75e065c2c8", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"calico-apiserver-66d8498c4f-r6tfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563063ca3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.412 [INFO][5071] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.198/32] ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.412 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali563063ca3e9 ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.416 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.417 [INFO][5071] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b65d841a-9846-425f-b276-4a75e065c2c8", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d", Pod:"calico-apiserver-66d8498c4f-r6tfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563063ca3e9", MAC:"26:b9:f5:38:8c:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:29:50.436769 containerd[1621]: 2025-01-17 12:29:50.430 [INFO][5071] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d" Namespace="calico-apiserver" Pod="calico-apiserver-66d8498c4f-r6tfl" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:29:50.461211 containerd[1621]: time="2025-01-17T12:29:50.460890107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:29:50.461211 containerd[1621]: time="2025-01-17T12:29:50.460942526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:29:50.461211 containerd[1621]: time="2025-01-17T12:29:50.460953336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:50.461211 containerd[1621]: time="2025-01-17T12:29:50.461028708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:29:50.485678 systemd[1]: run-netns-cni\x2de16649fd\x2dfaa2\x2d9fc4\x2de3bb\x2d5994307eb9ba.mount: Deactivated successfully. 
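
The IPAM entries above show Calico confirming an affinity for the block 192.168.73.192/26 and then handing out 192.168.73.198 under the host-wide IPAM lock. As a rough illustration only (not Calico's implementation; the function and the in-memory map are invented for the example, standing in for the datastore), the next-free-address step can be sketched in Go with net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks a block and returns the first address not already assigned.
    // Real Calico records allocations in its datastore while holding the
    // host-wide IPAM lock seen in the log; the map here only stands in for that.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.73.192/26") // affinity block from the log
        used := map[netip.Addr]bool{}
        // .192 is the block base; .193-.197 went to the pods set up earlier in the log.
        for _, s := range []string{"192.168.73.192", "192.168.73.193", "192.168.73.194",
            "192.168.73.195", "192.168.73.196", "192.168.73.197"} {
            used[netip.MustParseAddr(s)] = true
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println("assigned", a)
        }
    }

Running this prints "assigned 192.168.73.198", the address the plugin claimed for calico-apiserver-66d8498c4f-r6tfl.
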
Jan 17 12:29:50.523199 containerd[1621]: time="2025-01-17T12:29:50.523150370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8498c4f-r6tfl,Uid:b65d841a-9846-425f-b276-4a75e065c2c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d\"" Jan 17 12:29:50.542402 systemd-networkd[1250]: calid845aed4293: Gained IPv6LL Jan 17 12:29:52.103256 kubelet[3028]: I0117 12:29:52.103216 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:29:52.142966 systemd-networkd[1250]: cali563063ca3e9: Gained IPv6LL Jan 17 12:29:52.281704 containerd[1621]: time="2025-01-17T12:29:52.281618856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:52.283848 containerd[1621]: time="2025-01-17T12:29:52.283754772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:29:52.285275 containerd[1621]: time="2025-01-17T12:29:52.284442592Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:52.288211 containerd[1621]: time="2025-01-17T12:29:52.288169142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:52.289203 containerd[1621]: time="2025-01-17T12:29:52.288644444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.716171752s" Jan 17 12:29:52.292454 containerd[1621]: time="2025-01-17T12:29:52.289430077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:29:52.292454 containerd[1621]: time="2025-01-17T12:29:52.291521410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:29:52.293325 containerd[1621]: time="2025-01-17T12:29:52.293269920Z" level=info msg="CreateContainer within sandbox \"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:29:52.318392 containerd[1621]: time="2025-01-17T12:29:52.318285112Z" level=info msg="CreateContainer within sandbox \"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3e2d431bfa9ee3065d274cd2ac3d41613e6b2820f8a62b3d9484da258a54e9f\"" Jan 17 12:29:52.319268 containerd[1621]: time="2025-01-17T12:29:52.319055176Z" level=info msg="StartContainer for \"a3e2d431bfa9ee3065d274cd2ac3d41613e6b2820f8a62b3d9484da258a54e9f\"" Jan 17 12:29:52.489196 containerd[1621]: time="2025-01-17T12:29:52.489135702Z" level=info msg="StartContainer for \"a3e2d431bfa9ee3065d274cd2ac3d41613e6b2820f8a62b3d9484da258a54e9f\" returns successfully" Jan 17 12:29:53.035323 kernel: bpftool[5266]: memfd_create() called without MFD_EXEC or 
MFD_NOEXEC_SEAL set Jan 17 12:29:53.323164 systemd-networkd[1250]: vxlan.calico: Link UP Jan 17 12:29:53.323171 systemd-networkd[1250]: vxlan.calico: Gained carrier Jan 17 12:29:54.065452 kubelet[3028]: I0117 12:29:54.065390 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:29:54.475160 kubelet[3028]: I0117 12:29:54.475085 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:29:54.901518 containerd[1621]: time="2025-01-17T12:29:54.901465186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:54.903193 containerd[1621]: time="2025-01-17T12:29:54.903131932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:29:54.904373 containerd[1621]: time="2025-01-17T12:29:54.904301105Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:54.906570 containerd[1621]: time="2025-01-17T12:29:54.906522581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:54.907786 containerd[1621]: time="2025-01-17T12:29:54.907402092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.615851877s" Jan 17 12:29:54.907786 containerd[1621]: time="2025-01-17T12:29:54.907443480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:29:54.908668 containerd[1621]: time="2025-01-17T12:29:54.908583738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:29:54.935863 containerd[1621]: time="2025-01-17T12:29:54.935809747Z" level=info msg="CreateContainer within sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:29:54.953021 containerd[1621]: time="2025-01-17T12:29:54.952979494Z" level=info msg="CreateContainer within sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\"" Jan 17 12:29:54.953606 containerd[1621]: time="2025-01-17T12:29:54.953571464Z" level=info msg="StartContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\"" Jan 17 12:29:55.020575 containerd[1621]: time="2025-01-17T12:29:55.020392774Z" level=info msg="StartContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" returns successfully" Jan 17 12:29:55.140317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461918924.mount: Deactivated successfully. 
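
The kernel's bpftool line ("memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set") is the 6.3+ kernel asking callers to state whether a memfd may ever be executable; it is informational and harmless here. A hedged Go sketch of how a caller opts in via golang.org/x/sys/unix (the seal constant is spelled out locally in case the vendored x/sys version predates it; kernels older than 6.3 reject the flag with EINVAL):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // MFD_NOEXEC_SEAL (Linux UAPI value 0x8) marks the memfd non-executable and
    // seals that property; newer golang.org/x/sys/unix releases export it directly.
    const mfdNoexecSeal = 0x0008

    func main() {
        fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|mfdNoexecSeal)
        if err != nil {
            fmt.Fprintln(os.Stderr, "memfd_create:", err) // EINVAL on pre-6.3 kernels
            return
        }
        defer unix.Close(fd)
        fmt.Println("created a sealed, non-executable memfd; the kernel logs no warning")
    }
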
Jan 17 12:29:55.342374 systemd-networkd[1250]: vxlan.calico: Gained IPv6LL Jan 17 12:29:55.498817 kubelet[3028]: I0117 12:29:55.497785 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d8498c4f-4mtqh" podStartSLOduration=25.236566774 podStartE2EDuration="29.497739231s" podCreationTimestamp="2025-01-17 12:29:26 +0000 UTC" firstStartedPulling="2025-01-17 12:29:48.0286464 +0000 UTC m=+44.940677851" lastFinishedPulling="2025-01-17 12:29:52.289818856 +0000 UTC m=+49.201850308" observedRunningTime="2025-01-17 12:29:53.490425962 +0000 UTC m=+50.402457413" watchObservedRunningTime="2025-01-17 12:29:55.497739231 +0000 UTC m=+52.409770682" Jan 17 12:29:55.552449 kubelet[3028]: I0117 12:29:55.551552 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bbfc69fcf-t5lf6" podStartSLOduration=24.316090876 podStartE2EDuration="30.551517034s" podCreationTimestamp="2025-01-17 12:29:25 +0000 UTC" firstStartedPulling="2025-01-17 12:29:48.672364191 +0000 UTC m=+45.584395643" lastFinishedPulling="2025-01-17 12:29:54.90779035 +0000 UTC m=+51.819821801" observedRunningTime="2025-01-17 12:29:55.498433122 +0000 UTC m=+52.410464584" watchObservedRunningTime="2025-01-17 12:29:55.551517034 +0000 UTC m=+52.463548486" Jan 17 12:29:56.283280 containerd[1621]: time="2025-01-17T12:29:56.283146327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:56.284134 containerd[1621]: time="2025-01-17T12:29:56.283860546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:29:56.284739 containerd[1621]: time="2025-01-17T12:29:56.284685154Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:56.286600 containerd[1621]: time="2025-01-17T12:29:56.286559278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:56.287073 containerd[1621]: time="2025-01-17T12:29:56.287037736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.37841771s" Jan 17 12:29:56.287121 containerd[1621]: time="2025-01-17T12:29:56.287072782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:29:56.288300 containerd[1621]: time="2025-01-17T12:29:56.288264848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:29:56.289419 containerd[1621]: time="2025-01-17T12:29:56.289377315Z" level=info msg="CreateContainer within sandbox \"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:29:56.304836 containerd[1621]: 
time="2025-01-17T12:29:56.304799653Z" level=info msg="CreateContainer within sandbox \"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"308def731a05ee0d0562aec556d9911f0558b5ea4454b322175637a85de84b48\"" Jan 17 12:29:56.310739 containerd[1621]: time="2025-01-17T12:29:56.310701414Z" level=info msg="StartContainer for \"308def731a05ee0d0562aec556d9911f0558b5ea4454b322175637a85de84b48\"" Jan 17 12:29:56.378771 containerd[1621]: time="2025-01-17T12:29:56.378732975Z" level=info msg="StartContainer for \"308def731a05ee0d0562aec556d9911f0558b5ea4454b322175637a85de84b48\" returns successfully" Jan 17 12:29:56.704072 containerd[1621]: time="2025-01-17T12:29:56.703995932Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:29:56.705033 containerd[1621]: time="2025-01-17T12:29:56.704863158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:29:56.708326 containerd[1621]: time="2025-01-17T12:29:56.708271311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 419.897931ms" Jan 17 12:29:56.708326 containerd[1621]: time="2025-01-17T12:29:56.708322698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:29:56.713208 containerd[1621]: time="2025-01-17T12:29:56.712739583Z" level=info msg="CreateContainer within sandbox \"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:29:56.729560 containerd[1621]: time="2025-01-17T12:29:56.728926886Z" level=info msg="CreateContainer within sandbox \"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b5c06032d06f6b3a2297b422ca5307f5f15f3c79fbf7a755b0d43e09d80786f\"" Jan 17 12:29:56.730276 containerd[1621]: time="2025-01-17T12:29:56.729918788Z" level=info msg="StartContainer for \"5b5c06032d06f6b3a2297b422ca5307f5f15f3c79fbf7a755b0d43e09d80786f\"" Jan 17 12:29:56.734269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193994020.mount: Deactivated successfully. 
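
The kubelet's pod startup figures in the preceding entries are plain timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). The small Go program below reproduces the calico-apiserver-66d8498c4f-4mtqh numbers with timestamps copied from the log (the layout string is Go's default time formatting); it lands within a nanosecond of the logged 25.236566774s because the log truncates the pull timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the kubelet line for calico-apiserver-66d8498c4f-4mtqh.
        created := mustParse("2025-01-17 12:29:26 +0000 UTC")
        firstPull := mustParse("2025-01-17 12:29:48.0286464 +0000 UTC")
        lastPull := mustParse("2025-01-17 12:29:52.289818856 +0000 UTC")
        running := mustParse("2025-01-17 12:29:55.497739231 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded
        fmt.Println(e2e, slo)                // 29.497739231s 25.236566775s
    }

The coredns line earlier is consistent with the same formula: with no image pull (both pull timestamps are the zero time), podStartSLOduration equals podStartE2EDuration.
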
Jan 17 12:29:56.805529 containerd[1621]: time="2025-01-17T12:29:56.805496483Z" level=info msg="StartContainer for \"5b5c06032d06f6b3a2297b422ca5307f5f15f3c79fbf7a755b0d43e09d80786f\" returns successfully" Jan 17 12:29:57.460059 kubelet[3028]: I0117 12:29:57.460006 3028 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:29:57.461052 kubelet[3028]: I0117 12:29:57.461022 3028 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:29:57.532692 kubelet[3028]: I0117 12:29:57.529711 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ffrv6" podStartSLOduration=24.160654323 podStartE2EDuration="32.529657473s" podCreationTimestamp="2025-01-17 12:29:25 +0000 UTC" firstStartedPulling="2025-01-17 12:29:47.918402736 +0000 UTC m=+44.830434188" lastFinishedPulling="2025-01-17 12:29:56.287405887 +0000 UTC m=+53.199437338" observedRunningTime="2025-01-17 12:29:56.501628719 +0000 UTC m=+53.413660199" watchObservedRunningTime="2025-01-17 12:29:57.529657473 +0000 UTC m=+54.441688934" Jan 17 12:29:58.521330 kubelet[3028]: I0117 12:29:58.521257 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d8498c4f-r6tfl" podStartSLOduration=26.335626305 podStartE2EDuration="32.519863278s" podCreationTimestamp="2025-01-17 12:29:26 +0000 UTC" firstStartedPulling="2025-01-17 12:29:50.524341335 +0000 UTC m=+47.436372786" lastFinishedPulling="2025-01-17 12:29:56.708578267 +0000 UTC m=+53.620609759" observedRunningTime="2025-01-17 12:29:57.532874237 +0000 UTC m=+54.444905688" watchObservedRunningTime="2025-01-17 12:29:58.519863278 +0000 UTC m=+55.431894729" Jan 17 12:30:03.215165 containerd[1621]: time="2025-01-17T12:30:03.215033481Z" level=info msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.318 [WARNING][5556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0", GenerateName:"calico-kube-controllers-5bbfc69fcf-", Namespace:"calico-system", SelfLink:"", UID:"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbfc69fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4", Pod:"calico-kube-controllers-5bbfc69fcf-t5lf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14cbdf9f0da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.319 [INFO][5556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.319 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" iface="eth0" netns="" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.319 [INFO][5556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.319 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.343 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.343 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.343 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.349 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.349 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.350 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.356396 containerd[1621]: 2025-01-17 12:30:03.353 [INFO][5556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.358262 containerd[1621]: time="2025-01-17T12:30:03.356448700Z" level=info msg="TearDown network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" successfully" Jan 17 12:30:03.358262 containerd[1621]: time="2025-01-17T12:30:03.356517458Z" level=info msg="StopPodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" returns successfully" Jan 17 12:30:03.358262 containerd[1621]: time="2025-01-17T12:30:03.357271352Z" level=info msg="RemovePodSandbox for \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" Jan 17 12:30:03.358998 containerd[1621]: time="2025-01-17T12:30:03.358972744Z" level=info msg="Forcibly stopping sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\"" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.391 [WARNING][5580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0", GenerateName:"calico-kube-controllers-5bbfc69fcf-", Namespace:"calico-system", SelfLink:"", UID:"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbfc69fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4", Pod:"calico-kube-controllers-5bbfc69fcf-t5lf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14cbdf9f0da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.391 [INFO][5580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.391 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" iface="eth0" netns="" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.391 [INFO][5580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.391 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.411 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.411 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.412 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.416 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.416 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" HandleID="k8s-pod-network.22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.418 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.423243 containerd[1621]: 2025-01-17 12:30:03.420 [INFO][5580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" Jan 17 12:30:03.423243 containerd[1621]: time="2025-01-17T12:30:03.423225139Z" level=info msg="TearDown network for sandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" successfully" Jan 17 12:30:03.443076 containerd[1621]: time="2025-01-17T12:30:03.443022065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:03.444048 containerd[1621]: time="2025-01-17T12:30:03.443090252Z" level=info msg="RemovePodSandbox \"22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672\" returns successfully" Jan 17 12:30:03.444048 containerd[1621]: time="2025-01-17T12:30:03.443773464Z" level=info msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.478 [WARNING][5607] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce471ce-b9be-46d8-bb3c-6f71b777b182", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171", Pod:"csi-node-driver-ffrv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8c14f76672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.478 [INFO][5607] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.478 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" iface="eth0" netns="" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.478 [INFO][5607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.478 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.500 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.500 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.500 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.505 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.505 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.506 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.512382 containerd[1621]: 2025-01-17 12:30:03.509 [INFO][5607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.512382 containerd[1621]: time="2025-01-17T12:30:03.512355912Z" level=info msg="TearDown network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" successfully" Jan 17 12:30:03.512382 containerd[1621]: time="2025-01-17T12:30:03.512378043Z" level=info msg="StopPodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" returns successfully" Jan 17 12:30:03.513914 containerd[1621]: time="2025-01-17T12:30:03.512889513Z" level=info msg="RemovePodSandbox for \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" Jan 17 12:30:03.513914 containerd[1621]: time="2025-01-17T12:30:03.512913778Z" level=info msg="Forcibly stopping sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\"" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.574 [WARNING][5633] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce471ce-b9be-46d8-bb3c-6f71b777b182", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"583044cb2b8afb79996054957f71268f991857a13ac24553becc3a423adab171", Pod:"csi-node-driver-ffrv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8c14f76672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.576 [INFO][5633] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.576 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" iface="eth0" netns="" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.578 [INFO][5633] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.578 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.624 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.625 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.625 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.631 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.631 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" HandleID="k8s-pod-network.f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-csi--node--driver--ffrv6-eth0" Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.632 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.637413 containerd[1621]: 2025-01-17 12:30:03.635 [INFO][5633] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8" Jan 17 12:30:03.638685 containerd[1621]: time="2025-01-17T12:30:03.637451497Z" level=info msg="TearDown network for sandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" successfully" Jan 17 12:30:03.640767 containerd[1621]: time="2025-01-17T12:30:03.640723805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:03.640824 containerd[1621]: time="2025-01-17T12:30:03.640784349Z" level=info msg="RemovePodSandbox \"f1e719c4a3986400c2dc4597f5ac56029e4742a93dc6c94cf36b8aef4471d7f8\" returns successfully" Jan 17 12:30:03.641548 containerd[1621]: time="2025-01-17T12:30:03.641292262Z" level=info msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.683 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"119d39cc-29db-4e21-9625-748b3f95dac1", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6", Pod:"coredns-76f75df574-cgsrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44719db88d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.684 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.684 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" iface="eth0" netns="" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.684 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.684 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.703 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.703 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.703 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
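
The repeated "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" warnings in these teardown entries are expected: the old sandboxes (22027cca…, f1e719c4…, 269fa280…) are being removed after the pods were already re-created, so each WorkloadEndpoint now records the new sandbox's ContainerID and the DEL for the stale one must leave it in place. Schematically (illustrative Go only, not the plugin's code):

    package main

    import "fmt"

    // WorkloadEndpoint keeps the ContainerID of the sandbox that currently owns it.
    type WorkloadEndpoint struct {
        Name        string
        ContainerID string
    }

    // shouldDelete mirrors the guard seen in the log: a CNI DEL may only remove
    // the endpoint if it still belongs to the sandbox being deleted.
    func shouldDelete(wep WorkloadEndpoint, cniContainerID string) bool {
        return wep.ContainerID == cniContainerID
    }

    func main() {
        wep := WorkloadEndpoint{
            Name:        "calico-kube-controllers-5bbfc69fcf-t5lf6",
            ContainerID: "32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4", // new sandbox
        }
        staleID := "22027cca8d72b53e9a4f64d41bfe9ff6bbfe80f661d707370b5e58f29ccbd672" // sandbox being force-stopped
        if !shouldDelete(wep, staleID) {
            fmt.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP")
        }
    }
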
Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.709 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.709 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.710 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.715424 containerd[1621]: 2025-01-17 12:30:03.712 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.716099 containerd[1621]: time="2025-01-17T12:30:03.715464334Z" level=info msg="TearDown network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" successfully" Jan 17 12:30:03.716099 containerd[1621]: time="2025-01-17T12:30:03.715502176Z" level=info msg="StopPodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" returns successfully" Jan 17 12:30:03.716657 containerd[1621]: time="2025-01-17T12:30:03.716392395Z" level=info msg="RemovePodSandbox for \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" Jan 17 12:30:03.716657 containerd[1621]: time="2025-01-17T12:30:03.716437910Z" level=info msg="Forcibly stopping sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\"" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.751 [WARNING][5690] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"119d39cc-29db-4e21-9625-748b3f95dac1", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"27a5f01ea9a83d304828c92c66e03736d10f886b51b60da59a5c6c31f769a2e6", Pod:"coredns-76f75df574-cgsrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44719db88d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.751 [INFO][5690] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.751 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" iface="eth0" netns="" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.751 [INFO][5690] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.751 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.772 [INFO][5696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.772 [INFO][5696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.772 [INFO][5696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.777 [WARNING][5696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.777 [INFO][5696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" HandleID="k8s-pod-network.269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--cgsrh-eth0" Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.780 [INFO][5696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.790122 containerd[1621]: 2025-01-17 12:30:03.785 [INFO][5690] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d" Jan 17 12:30:03.790122 containerd[1621]: time="2025-01-17T12:30:03.790096730Z" level=info msg="TearDown network for sandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" successfully" Jan 17 12:30:03.794484 containerd[1621]: time="2025-01-17T12:30:03.794396396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:03.794484 containerd[1621]: time="2025-01-17T12:30:03.794452111Z" level=info msg="RemovePodSandbox \"269fa2802331ce993bf6eebe48af3d5188204ea302bcac67f2751afe84b3e38d\" returns successfully" Jan 17 12:30:03.794986 containerd[1621]: time="2025-01-17T12:30:03.794952049Z" level=info msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.832 [WARNING][5714] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f34c0-75e3-46f8-899b-6f07ace470c9", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c", Pod:"calico-apiserver-66d8498c4f-4mtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff768cbefb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.832 [INFO][5714] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.832 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" iface="eth0" netns="" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.832 [INFO][5714] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.832 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.856 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.856 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.856 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.861 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.861 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.863 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.868255 containerd[1621]: 2025-01-17 12:30:03.865 [INFO][5714] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.869229 containerd[1621]: time="2025-01-17T12:30:03.868272273Z" level=info msg="TearDown network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" successfully" Jan 17 12:30:03.869229 containerd[1621]: time="2025-01-17T12:30:03.868312639Z" level=info msg="StopPodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" returns successfully" Jan 17 12:30:03.869229 containerd[1621]: time="2025-01-17T12:30:03.868828967Z" level=info msg="RemovePodSandbox for \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" Jan 17 12:30:03.869229 containerd[1621]: time="2025-01-17T12:30:03.868858493Z" level=info msg="Forcibly stopping sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\"" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.901 [WARNING][5739] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f34c0-75e3-46f8-899b-6f07ace470c9", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"7d5afb244ec0aff162f1dfe2944349e718e8c32fcb458f044067e55e2aeb265c", Pod:"calico-apiserver-66d8498c4f-4mtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff768cbefb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.901 [INFO][5739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.901 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" iface="eth0" netns="" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.901 [INFO][5739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.901 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.923 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.923 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.923 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.928 [WARNING][5746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.928 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" HandleID="k8s-pod-network.a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--4mtqh-eth0" Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.929 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:03.934287 containerd[1621]: 2025-01-17 12:30:03.932 [INFO][5739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319" Jan 17 12:30:03.935420 containerd[1621]: time="2025-01-17T12:30:03.934430754Z" level=info msg="TearDown network for sandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" successfully" Jan 17 12:30:03.937962 containerd[1621]: time="2025-01-17T12:30:03.937913537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:03.938013 containerd[1621]: time="2025-01-17T12:30:03.937968510Z" level=info msg="RemovePodSandbox \"a68f93ab3723cb2110cf0675c5ac3233f0fc7c5ad7e197cf8750f378ff323319\" returns successfully" Jan 17 12:30:03.938570 containerd[1621]: time="2025-01-17T12:30:03.938446306Z" level=info msg="StopPodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.970 [WARNING][5764] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acf7efb0-d6d1-45db-b95d-bc14baf6f897", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073", Pod:"coredns-76f75df574-p8c8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid845aed4293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.970 [INFO][5764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.970 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" iface="eth0" netns="" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.970 [INFO][5764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.970 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.988 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.988 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.988 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.994 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.994 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.995 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:04.001424 containerd[1621]: 2025-01-17 12:30:03.998 [INFO][5764] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.002695 containerd[1621]: time="2025-01-17T12:30:04.001429310Z" level=info msg="TearDown network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" successfully" Jan 17 12:30:04.002695 containerd[1621]: time="2025-01-17T12:30:04.001453005Z" level=info msg="StopPodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" returns successfully" Jan 17 12:30:04.002695 containerd[1621]: time="2025-01-17T12:30:04.001924369Z" level=info msg="RemovePodSandbox for \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" Jan 17 12:30:04.002695 containerd[1621]: time="2025-01-17T12:30:04.001948724Z" level=info msg="Forcibly stopping sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\"" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.036 [WARNING][5788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acf7efb0-d6d1-45db-b95d-bc14baf6f897", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"f5adcf76176ed40c4183089e4c3e3936502c36057ee22fa1e9424ec292134073", Pod:"coredns-76f75df574-p8c8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid845aed4293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.036 [INFO][5788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.036 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" iface="eth0" netns="" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.036 [INFO][5788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.036 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.057 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.057 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.057 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.062 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.062 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" HandleID="k8s-pod-network.df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-coredns--76f75df574--p8c8v-eth0" Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.065 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:04.070301 containerd[1621]: 2025-01-17 12:30:04.067 [INFO][5788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635" Jan 17 12:30:04.071014 containerd[1621]: time="2025-01-17T12:30:04.070273439Z" level=info msg="TearDown network for sandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" successfully" Jan 17 12:30:04.075235 containerd[1621]: time="2025-01-17T12:30:04.075202465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:04.075419 containerd[1621]: time="2025-01-17T12:30:04.075256576Z" level=info msg="RemovePodSandbox \"df4ea16feedd0bb898cbd982a6758252e232ea367349f3e887661cc84b6f4635\" returns successfully" Jan 17 12:30:04.075775 containerd[1621]: time="2025-01-17T12:30:04.075703155Z" level=info msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.110 [WARNING][5812] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b65d841a-9846-425f-b276-4a75e065c2c8", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d", Pod:"calico-apiserver-66d8498c4f-r6tfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563063ca3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.110 [INFO][5812] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.110 [INFO][5812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" iface="eth0" netns="" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.110 [INFO][5812] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.110 [INFO][5812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.132 [INFO][5818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.132 [INFO][5818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.132 [INFO][5818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.137 [WARNING][5818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.137 [INFO][5818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.138 [INFO][5818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:04.143646 containerd[1621]: 2025-01-17 12:30:04.141 [INFO][5812] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.144103 containerd[1621]: time="2025-01-17T12:30:04.143663555Z" level=info msg="TearDown network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" successfully" Jan 17 12:30:04.144103 containerd[1621]: time="2025-01-17T12:30:04.143688412Z" level=info msg="StopPodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" returns successfully" Jan 17 12:30:04.144151 containerd[1621]: time="2025-01-17T12:30:04.144131223Z" level=info msg="RemovePodSandbox for \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" Jan 17 12:30:04.144172 containerd[1621]: time="2025-01-17T12:30:04.144155018Z" level=info msg="Forcibly stopping sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\"" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.177 [WARNING][5837] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0", GenerateName:"calico-apiserver-66d8498c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b65d841a-9846-425f-b276-4a75e065c2c8", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8498c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"6f3209911f48b489520adb8cfe559c03858ba335f8890513ae26e28ab285916d", Pod:"calico-apiserver-66d8498c4f-r6tfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563063ca3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.178 [INFO][5837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.178 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" iface="eth0" netns="" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.178 [INFO][5837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.178 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.198 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.199 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.199 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.204 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.204 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" HandleID="k8s-pod-network.6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--apiserver--66d8498c4f--r6tfl-eth0" Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.205 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:30:04.211614 containerd[1621]: 2025-01-17 12:30:04.208 [INFO][5837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522" Jan 17 12:30:04.211614 containerd[1621]: time="2025-01-17T12:30:04.211087550Z" level=info msg="TearDown network for sandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" successfully" Jan 17 12:30:04.216110 containerd[1621]: time="2025-01-17T12:30:04.215910187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:30:04.216110 containerd[1621]: time="2025-01-17T12:30:04.215986811Z" level=info msg="RemovePodSandbox \"6e7d9c0f55fb8a4f9dd454c1723687aba4a2f33289c3c4244cd1d6cf01c9a522\" returns successfully" Jan 17 12:30:04.905745 systemd[1]: run-containerd-runc-k8s.io-4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2-runc.Nc5R8d.mount: Deactivated successfully. Jan 17 12:30:26.312106 containerd[1621]: time="2025-01-17T12:30:26.312061343Z" level=info msg="StopContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" with timeout 300 (s)" Jan 17 12:30:26.313990 containerd[1621]: time="2025-01-17T12:30:26.313960977Z" level=info msg="Stop container \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" with signal terminated" Jan 17 12:30:26.436409 systemd[1]: run-containerd-runc-k8s.io-7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc-runc.MuaTyy.mount: Deactivated successfully. Jan 17 12:30:26.453431 containerd[1621]: time="2025-01-17T12:30:26.453322404Z" level=info msg="StopContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" with timeout 30 (s)" Jan 17 12:30:26.455653 containerd[1621]: time="2025-01-17T12:30:26.455620665Z" level=info msg="Stop container \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" with signal terminated" Jan 17 12:30:26.600686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2-rootfs.mount: Deactivated successfully. 
Jan 17 12:30:26.620772 containerd[1621]: time="2025-01-17T12:30:26.613343759Z" level=info msg="shim disconnected" id=4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2 namespace=k8s.io Jan 17 12:30:26.625049 containerd[1621]: time="2025-01-17T12:30:26.624716844Z" level=warning msg="cleaning up after shim disconnected" id=4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2 namespace=k8s.io Jan 17 12:30:26.625049 containerd[1621]: time="2025-01-17T12:30:26.624811090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:26.665945 containerd[1621]: time="2025-01-17T12:30:26.665864553Z" level=info msg="StopContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" with timeout 5 (s)" Jan 17 12:30:26.666397 containerd[1621]: time="2025-01-17T12:30:26.666314216Z" level=info msg="Stop container \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" with signal terminated" Jan 17 12:30:26.675822 containerd[1621]: time="2025-01-17T12:30:26.675747322Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:30:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:30:26.687703 containerd[1621]: time="2025-01-17T12:30:26.687626126Z" level=info msg="StopContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" returns successfully" Jan 17 12:30:26.694144 containerd[1621]: time="2025-01-17T12:30:26.688533648Z" level=info msg="StopPodSandbox for \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\"" Jan 17 12:30:26.694144 containerd[1621]: time="2025-01-17T12:30:26.694066738Z" level=info msg="Container to stop \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:30:26.703193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4-shm.mount: Deactivated successfully. Jan 17 12:30:26.742332 containerd[1621]: time="2025-01-17T12:30:26.742274951Z" level=info msg="shim disconnected" id=32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4 namespace=k8s.io Jan 17 12:30:26.745309 containerd[1621]: time="2025-01-17T12:30:26.743137631Z" level=warning msg="cleaning up after shim disconnected" id=32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4 namespace=k8s.io Jan 17 12:30:26.749065 containerd[1621]: time="2025-01-17T12:30:26.745400646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:26.752696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4-rootfs.mount: Deactivated successfully. 
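The "Stop container ... with signal terminated" and "shim disconnected" records reflect containerd signalling each task and then reaping its shim once the process exits. Roughly the same operation expressed through the containerd Go client is sketched below; the socket path, the CRI "k8s.io" namespace, and the reuse of a container ID from the log are illustrative assumptions, not a reconstruction of what any component on this node actually ran.

```go
// Sketch: stop a CRI-managed container by sending SIGTERM to its task via the
// containerd client, mirroring the "Stop container ... with signal terminated"
// records above. Assumptions: default containerd socket, CRI "k8s.io" namespace,
// container ID taken from the log only as an example.
package main

import (
	"context"
	"log"
	"syscall"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// CRI pods and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	id := "4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2"
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatalf("load container: %v", err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatalf("load task: %v", err)
	}

	// Register for the exit event before signalling, then wait for the process
	// to exit; once it does, containerd tears the shim down ("shim disconnected").
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatalf("wait: %v", err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatalf("kill: %v", err)
	}
	status := <-exitCh
	log.Printf("container exited with status %d", status.ExitCode())
}
```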
Jan 17 12:30:26.785539 containerd[1621]: time="2025-01-17T12:30:26.785493956Z" level=info msg="shim disconnected" id=7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc namespace=k8s.io Jan 17 12:30:26.786677 containerd[1621]: time="2025-01-17T12:30:26.786655415Z" level=warning msg="cleaning up after shim disconnected" id=7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc namespace=k8s.io Jan 17 12:30:26.786768 containerd[1621]: time="2025-01-17T12:30:26.786753860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:26.812138 containerd[1621]: time="2025-01-17T12:30:26.812105898Z" level=info msg="StopContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" returns successfully" Jan 17 12:30:26.814148 containerd[1621]: time="2025-01-17T12:30:26.814112352Z" level=info msg="StopPodSandbox for \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\"" Jan 17 12:30:26.814307 containerd[1621]: time="2025-01-17T12:30:26.814168186Z" level=info msg="Container to stop \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:30:26.814307 containerd[1621]: time="2025-01-17T12:30:26.814199175Z" level=info msg="Container to stop \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:30:26.814307 containerd[1621]: time="2025-01-17T12:30:26.814212540Z" level=info msg="Container to stop \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:30:26.849823 containerd[1621]: time="2025-01-17T12:30:26.849553446Z" level=info msg="shim disconnected" id=cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf namespace=k8s.io Jan 17 12:30:26.850110 containerd[1621]: time="2025-01-17T12:30:26.849993191Z" level=warning msg="cleaning up after shim disconnected" id=cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf namespace=k8s.io Jan 17 12:30:26.850110 containerd[1621]: time="2025-01-17T12:30:26.850014892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:26.868132 systemd-networkd[1250]: cali14cbdf9f0da: Link DOWN Jan 17 12:30:26.868144 systemd-networkd[1250]: cali14cbdf9f0da: Lost carrier Jan 17 12:30:26.891196 containerd[1621]: time="2025-01-17T12:30:26.890976281Z" level=info msg="TearDown network for sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" successfully" Jan 17 12:30:26.891196 containerd[1621]: time="2025-01-17T12:30:26.891005386Z" level=info msg="StopPodSandbox for \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" returns successfully" Jan 17 12:30:26.936885 kubelet[3028]: I0117 12:30:26.935511 3028 topology_manager.go:215] "Topology Admit Handler" podUID="d90be92b-9f29-4965-bf86-939cd91f191e" podNamespace="calico-system" podName="calico-node-v75t2" Jan 17 12:30:26.938502 kubelet[3028]: E0117 12:30:26.938288 3028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" containerName="install-cni" Jan 17 12:30:26.938502 kubelet[3028]: E0117 12:30:26.938321 3028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" containerName="calico-node" Jan 17 12:30:26.938502 kubelet[3028]: E0117 12:30:26.938357 3028 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" containerName="flexvol-driver" Jan 17 12:30:26.940067 kubelet[3028]: I0117 12:30:26.939779 3028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" containerName="calico-node" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.865 [INFO][6043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.866 [INFO][6043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" iface="eth0" netns="/var/run/netns/cni-5d82de07-edfa-cbcf-bc76-50766a75bf0d" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.866 [INFO][6043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" iface="eth0" netns="/var/run/netns/cni-5d82de07-edfa-cbcf-bc76-50766a75bf0d" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.875 [INFO][6043] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" after=9.142701ms iface="eth0" netns="/var/run/netns/cni-5d82de07-edfa-cbcf-bc76-50766a75bf0d" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.875 [INFO][6043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.875 [INFO][6043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.906 [INFO][6084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.906 [INFO][6084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.906 [INFO][6084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.955 [INFO][6084] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.955 [INFO][6084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.960 [INFO][6084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:30:26.966733 containerd[1621]: 2025-01-17 12:30:26.964 [INFO][6043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:30:26.968405 containerd[1621]: time="2025-01-17T12:30:26.967015173Z" level=info msg="TearDown network for sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" successfully" Jan 17 12:30:26.968405 containerd[1621]: time="2025-01-17T12:30:26.967041552Z" level=info msg="StopPodSandbox for \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" returns successfully" Jan 17 12:30:27.061665 kubelet[3028]: I0117 12:30:27.061592 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-lib-calico\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061832 kubelet[3028]: I0117 12:30:27.061701 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-flexvol-driver-host\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061832 kubelet[3028]: I0117 12:30:27.061742 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-lib-modules\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061832 kubelet[3028]: I0117 12:30:27.061771 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-log-dir\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061832 kubelet[3028]: I0117 12:30:27.061809 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f286a43-3415-4b62-af04-ec5ba12ddf00-node-certs\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061958 kubelet[3028]: I0117 12:30:27.061847 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skk9m\" (UniqueName: \"kubernetes.io/projected/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-kube-api-access-skk9m\") pod \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\" (UID: \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\") " Jan 17 12:30:27.061958 kubelet[3028]: I0117 12:30:27.061884 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f286a43-3415-4b62-af04-ec5ba12ddf00-tigera-ca-bundle\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061958 kubelet[3028]: I0117 12:30:27.061910 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-policysync\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.061958 kubelet[3028]: I0117 12:30:27.061941 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-run-calico\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.062046 kubelet[3028]: I0117 12:30:27.061973 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-tigera-ca-bundle\") pod \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\" (UID: \"5a63cf5e-1b3a-4788-8363-8dde4df3c8b8\") " Jan 17 12:30:27.062046 kubelet[3028]: I0117 12:30:27.062000 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-net-dir\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.062046 kubelet[3028]: I0117 12:30:27.062022 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-bin-dir\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.062106 kubelet[3028]: I0117 12:30:27.062048 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-xtables-lock\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.062106 kubelet[3028]: I0117 12:30:27.062084 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5j9w\" (UniqueName: \"kubernetes.io/projected/4f286a43-3415-4b62-af04-ec5ba12ddf00-kube-api-access-c5j9w\") pod \"4f286a43-3415-4b62-af04-ec5ba12ddf00\" (UID: \"4f286a43-3415-4b62-af04-ec5ba12ddf00\") " Jan 17 12:30:27.063854 kubelet[3028]: I0117 12:30:27.063817 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-cni-net-dir\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.063923 kubelet[3028]: I0117 12:30:27.063878 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d90be92b-9f29-4965-bf86-939cd91f191e-node-certs\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.063923 kubelet[3028]: I0117 12:30:27.063898 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94fg9\" (UniqueName: \"kubernetes.io/projected/d90be92b-9f29-4965-bf86-939cd91f191e-kube-api-access-94fg9\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.063923 kubelet[3028]: I0117 12:30:27.063922 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-var-run-calico\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.064018 
kubelet[3028]: I0117 12:30:27.063952 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-lib-modules\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.064018 kubelet[3028]: I0117 12:30:27.063978 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-cni-bin-dir\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.064018 kubelet[3028]: I0117 12:30:27.064008 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-cni-log-dir\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.064248 kubelet[3028]: I0117 12:30:27.064026 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-flexvol-driver-host\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.066635 kubelet[3028]: I0117 12:30:27.065059 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.066635 kubelet[3028]: I0117 12:30:27.066487 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.066635 kubelet[3028]: I0117 12:30:27.066505 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.066635 kubelet[3028]: I0117 12:30:27.066525 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.066635 kubelet[3028]: I0117 12:30:27.066538 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90be92b-9f29-4965-bf86-939cd91f191e-tigera-ca-bundle\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066581 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-var-lib-calico\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066603 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-xtables-lock\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066620 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d90be92b-9f29-4965-bf86-939cd91f191e-policysync\") pod \"calico-node-v75t2\" (UID: \"d90be92b-9f29-4965-bf86-939cd91f191e\") " pod="calico-system/calico-node-v75t2" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066644 3028 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-lib-calico\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066655 3028 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-flexvol-driver-host\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066666 3028 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-lib-modules\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.066807 kubelet[3028]: I0117 12:30:27.066676 3028 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-log-dir\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.071893 kubelet[3028]: I0117 12:30:27.071460 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f286a43-3415-4b62-af04-ec5ba12ddf00-node-certs" (OuterVolumeSpecName: "node-certs") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:30:27.071893 kubelet[3028]: I0117 12:30:27.071603 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.072465 kubelet[3028]: I0117 12:30:27.072393 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f286a43-3415-4b62-af04-ec5ba12ddf00-kube-api-access-c5j9w" (OuterVolumeSpecName: "kube-api-access-c5j9w") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "kube-api-access-c5j9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:30:27.075332 kubelet[3028]: I0117 12:30:27.075308 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-kube-api-access-skk9m" (OuterVolumeSpecName: "kube-api-access-skk9m") pod "5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" (UID: "5a63cf5e-1b3a-4788-8363-8dde4df3c8b8"). InnerVolumeSpecName "kube-api-access-skk9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:30:27.080273 kubelet[3028]: I0117 12:30:27.080251 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-policysync" (OuterVolumeSpecName: "policysync") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.081040 kubelet[3028]: I0117 12:30:27.081009 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" (UID: "5a63cf5e-1b3a-4788-8363-8dde4df3c8b8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:30:27.081109 kubelet[3028]: I0117 12:30:27.081064 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.081109 kubelet[3028]: I0117 12:30:27.081082 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.081285 kubelet[3028]: I0117 12:30:27.081168 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:30:27.081805 kubelet[3028]: I0117 12:30:27.081771 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f286a43-3415-4b62-af04-ec5ba12ddf00-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "4f286a43-3415-4b62-af04-ec5ba12ddf00" (UID: "4f286a43-3415-4b62-af04-ec5ba12ddf00"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167025 3028 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f286a43-3415-4b62-af04-ec5ba12ddf00-tigera-ca-bundle\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167060 3028 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-net-dir\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167073 3028 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-cni-bin-dir\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167089 3028 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f286a43-3415-4b62-af04-ec5ba12ddf00-node-certs\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167100 3028 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-policysync\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167111 3028 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-var-run-calico\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.167153 kubelet[3028]: I0117 12:30:27.167122 3028 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-tigera-ca-bundle\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.168845 kubelet[3028]: I0117 12:30:27.167133 3028 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f286a43-3415-4b62-af04-ec5ba12ddf00-xtables-lock\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.168845 kubelet[3028]: I0117 12:30:27.168268 3028 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c5j9w\" (UniqueName: \"kubernetes.io/projected/4f286a43-3415-4b62-af04-ec5ba12ddf00-kube-api-access-c5j9w\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.168845 kubelet[3028]: I0117 12:30:27.168287 3028 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-skk9m\" (UniqueName: \"kubernetes.io/projected/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8-kube-api-access-skk9m\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.252676 containerd[1621]: time="2025-01-17T12:30:27.252520724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v75t2,Uid:d90be92b-9f29-4965-bf86-939cd91f191e,Namespace:calico-system,Attempt:0,}" Jan 17 12:30:27.313696 containerd[1621]: time="2025-01-17T12:30:27.313611514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:27.314341 containerd[1621]: time="2025-01-17T12:30:27.314149623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:27.314341 containerd[1621]: time="2025-01-17T12:30:27.314166996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:27.314341 containerd[1621]: time="2025-01-17T12:30:27.314268667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:27.365892 containerd[1621]: time="2025-01-17T12:30:27.365622252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v75t2,Uid:d90be92b-9f29-4965-bf86-939cd91f191e,Namespace:calico-system,Attempt:0,} returns sandbox id \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\"" Jan 17 12:30:27.374881 containerd[1621]: time="2025-01-17T12:30:27.374775332Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:30:27.390354 containerd[1621]: time="2025-01-17T12:30:27.389799997Z" level=info msg="shim disconnected" id=92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19 namespace=k8s.io Jan 17 12:30:27.390354 containerd[1621]: time="2025-01-17T12:30:27.390172596Z" level=warning msg="cleaning up after shim disconnected" id=92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19 namespace=k8s.io Jan 17 12:30:27.390354 containerd[1621]: time="2025-01-17T12:30:27.390202292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:27.390354 containerd[1621]: time="2025-01-17T12:30:27.390127852Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991\"" Jan 17 12:30:27.392955 containerd[1621]: time="2025-01-17T12:30:27.392757955Z" level=info msg="StartContainer for \"6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991\"" Jan 17 12:30:27.427648 containerd[1621]: time="2025-01-17T12:30:27.427061496Z" level=info msg="StopContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" returns successfully" Jan 17 12:30:27.433036 containerd[1621]: time="2025-01-17T12:30:27.430400790Z" level=info msg="StopPodSandbox for \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\"" Jan 17 12:30:27.433036 containerd[1621]: time="2025-01-17T12:30:27.432990208Z" level=info msg="Container to stop \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:30:27.432194 systemd[1]: var-lib-kubelet-pods-5a63cf5e\x2d1b3a\x2d4788\x2d8363\x2d8dde4df3c8b8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 17 12:30:27.432368 systemd[1]: run-netns-cni\x2d5d82de07\x2dedfa\x2dcbcf\x2dbc76\x2d50766a75bf0d.mount: Deactivated successfully. Jan 17 12:30:27.432508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc-rootfs.mount: Deactivated successfully. Jan 17 12:30:27.432631 systemd[1]: var-lib-kubelet-pods-4f286a43\x2d3415\x2d4b62\x2daf04\x2dec5ba12ddf00-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
Jan 17 12:30:27.432747 systemd[1]: var-lib-kubelet-pods-5a63cf5e\x2d1b3a\x2d4788\x2d8363\x2d8dde4df3c8b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskk9m.mount: Deactivated successfully. Jan 17 12:30:27.433713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19-rootfs.mount: Deactivated successfully. Jan 17 12:30:27.433847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf-rootfs.mount: Deactivated successfully. Jan 17 12:30:27.434359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf-shm.mount: Deactivated successfully. Jan 17 12:30:27.434502 systemd[1]: var-lib-kubelet-pods-4f286a43\x2d3415\x2d4b62\x2daf04\x2dec5ba12ddf00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc5j9w.mount: Deactivated successfully. Jan 17 12:30:27.434629 systemd[1]: var-lib-kubelet-pods-4f286a43\x2d3415\x2d4b62\x2daf04\x2dec5ba12ddf00-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 17 12:30:27.449319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc-shm.mount: Deactivated successfully. Jan 17 12:30:27.504265 containerd[1621]: time="2025-01-17T12:30:27.504039520Z" level=info msg="shim disconnected" id=e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc namespace=k8s.io Jan 17 12:30:27.504265 containerd[1621]: time="2025-01-17T12:30:27.504090395Z" level=warning msg="cleaning up after shim disconnected" id=e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc namespace=k8s.io Jan 17 12:30:27.504265 containerd[1621]: time="2025-01-17T12:30:27.504098931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:27.508690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc-rootfs.mount: Deactivated successfully. 
Jan 17 12:30:27.513936 containerd[1621]: time="2025-01-17T12:30:27.512362543Z" level=info msg="StartContainer for \"6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991\" returns successfully" Jan 17 12:30:27.531954 containerd[1621]: time="2025-01-17T12:30:27.531893862Z" level=info msg="TearDown network for sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" successfully" Jan 17 12:30:27.531954 containerd[1621]: time="2025-01-17T12:30:27.531941512Z" level=info msg="StopPodSandbox for \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" returns successfully" Jan 17 12:30:27.573597 kubelet[3028]: I0117 12:30:27.572500 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eaf9f44c-94bd-4510-b426-db623b339a59-typha-certs\") pod \"eaf9f44c-94bd-4510-b426-db623b339a59\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " Jan 17 12:30:27.573597 kubelet[3028]: I0117 12:30:27.572558 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-449lv\" (UniqueName: \"kubernetes.io/projected/eaf9f44c-94bd-4510-b426-db623b339a59-kube-api-access-449lv\") pod \"eaf9f44c-94bd-4510-b426-db623b339a59\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " Jan 17 12:30:27.573597 kubelet[3028]: I0117 12:30:27.572595 3028 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf9f44c-94bd-4510-b426-db623b339a59-tigera-ca-bundle\") pod \"eaf9f44c-94bd-4510-b426-db623b339a59\" (UID: \"eaf9f44c-94bd-4510-b426-db623b339a59\") " Jan 17 12:30:27.583004 kubelet[3028]: I0117 12:30:27.582959 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf9f44c-94bd-4510-b426-db623b339a59-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "eaf9f44c-94bd-4510-b426-db623b339a59" (UID: "eaf9f44c-94bd-4510-b426-db623b339a59"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:30:27.583524 systemd[1]: var-lib-kubelet-pods-eaf9f44c\x2d94bd\x2d4510\x2db426\x2ddb623b339a59-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 17 12:30:27.589889 kubelet[3028]: I0117 12:30:27.588923 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf9f44c-94bd-4510-b426-db623b339a59-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "eaf9f44c-94bd-4510-b426-db623b339a59" (UID: "eaf9f44c-94bd-4510-b426-db623b339a59"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:30:27.592989 kubelet[3028]: I0117 12:30:27.591794 3028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf9f44c-94bd-4510-b426-db623b339a59-kube-api-access-449lv" (OuterVolumeSpecName: "kube-api-access-449lv") pod "eaf9f44c-94bd-4510-b426-db623b339a59" (UID: "eaf9f44c-94bd-4510-b426-db623b339a59"). InnerVolumeSpecName "kube-api-access-449lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:30:27.601360 kubelet[3028]: I0117 12:30:27.601327 3028 scope.go:117] "RemoveContainer" containerID="4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2" Jan 17 12:30:27.634609 containerd[1621]: time="2025-01-17T12:30:27.634557218Z" level=info msg="shim disconnected" id=6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991 namespace=k8s.io Jan 17 12:30:27.634824 containerd[1621]: time="2025-01-17T12:30:27.634807788Z" level=warning msg="cleaning up after shim disconnected" id=6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991 namespace=k8s.io Jan 17 12:30:27.634894 containerd[1621]: time="2025-01-17T12:30:27.634881157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:27.668535 containerd[1621]: time="2025-01-17T12:30:27.668465297Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:30:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:30:27.677745 containerd[1621]: time="2025-01-17T12:30:27.677598270Z" level=info msg="RemoveContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\"" Jan 17 12:30:27.688197 containerd[1621]: time="2025-01-17T12:30:27.688083299Z" level=info msg="RemoveContainer for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" returns successfully" Jan 17 12:30:27.691613 kubelet[3028]: I0117 12:30:27.691444 3028 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf9f44c-94bd-4510-b426-db623b339a59-tigera-ca-bundle\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.691613 kubelet[3028]: I0117 12:30:27.691481 3028 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eaf9f44c-94bd-4510-b426-db623b339a59-typha-certs\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.691613 kubelet[3028]: I0117 12:30:27.691493 3028 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-449lv\" (UniqueName: \"kubernetes.io/projected/eaf9f44c-94bd-4510-b426-db623b339a59-kube-api-access-449lv\") on node \"ci-4081-3-0-6-80d8e78ae3\" DevicePath \"\"" Jan 17 12:30:27.694443 kubelet[3028]: I0117 12:30:27.694401 3028 scope.go:117] "RemoveContainer" containerID="4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2" Jan 17 12:30:27.700943 containerd[1621]: time="2025-01-17T12:30:27.694675915Z" level=error msg="ContainerStatus for \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\": not found" Jan 17 12:30:27.716005 kubelet[3028]: E0117 12:30:27.715986 3028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\": not found" containerID="4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2" Jan 17 12:30:27.716226 kubelet[3028]: I0117 12:30:27.716212 3028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2"} err="failed to get container status 
\"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c1263c638a788fce1e8f8149791ef438670fd3f16385ed6caad7c2f7cefd0f2\": not found" Jan 17 12:30:27.716331 kubelet[3028]: I0117 12:30:27.716320 3028 scope.go:117] "RemoveContainer" containerID="92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19" Jan 17 12:30:27.717710 containerd[1621]: time="2025-01-17T12:30:27.717682613Z" level=info msg="RemoveContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\"" Jan 17 12:30:27.721450 containerd[1621]: time="2025-01-17T12:30:27.721429272Z" level=info msg="RemoveContainer for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" returns successfully" Jan 17 12:30:27.721639 kubelet[3028]: I0117 12:30:27.721620 3028 scope.go:117] "RemoveContainer" containerID="92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19" Jan 17 12:30:27.721882 containerd[1621]: time="2025-01-17T12:30:27.721858147Z" level=error msg="ContainerStatus for \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\": not found" Jan 17 12:30:27.722094 kubelet[3028]: E0117 12:30:27.722081 3028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\": not found" containerID="92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19" Jan 17 12:30:27.722209 kubelet[3028]: I0117 12:30:27.722169 3028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19"} err="failed to get container status \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\": rpc error: code = NotFound desc = an error occurred when try to find container \"92ed01157c12c85abfdbef84ff433b3f0a90fee7d01934a3a0ce0fdf5a0f8e19\": not found" Jan 17 12:30:27.722322 kubelet[3028]: I0117 12:30:27.722264 3028 scope.go:117] "RemoveContainer" containerID="7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc" Jan 17 12:30:27.723211 containerd[1621]: time="2025-01-17T12:30:27.723173174Z" level=info msg="RemoveContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\"" Jan 17 12:30:27.726705 containerd[1621]: time="2025-01-17T12:30:27.726676376Z" level=info msg="RemoveContainer for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" returns successfully" Jan 17 12:30:27.726887 kubelet[3028]: I0117 12:30:27.726875 3028 scope.go:117] "RemoveContainer" containerID="7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9" Jan 17 12:30:27.728471 containerd[1621]: time="2025-01-17T12:30:27.728454081Z" level=info msg="RemoveContainer for \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\"" Jan 17 12:30:27.731869 containerd[1621]: time="2025-01-17T12:30:27.731850161Z" level=info msg="RemoveContainer for \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\" returns successfully" Jan 17 12:30:27.732158 kubelet[3028]: I0117 12:30:27.732127 3028 scope.go:117] "RemoveContainer" containerID="d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b" Jan 17 12:30:27.733100 
containerd[1621]: time="2025-01-17T12:30:27.733081671Z" level=info msg="RemoveContainer for \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\"" Jan 17 12:30:27.736703 containerd[1621]: time="2025-01-17T12:30:27.736617113Z" level=info msg="RemoveContainer for \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\" returns successfully" Jan 17 12:30:27.736893 kubelet[3028]: I0117 12:30:27.736760 3028 scope.go:117] "RemoveContainer" containerID="7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc" Jan 17 12:30:27.737063 containerd[1621]: time="2025-01-17T12:30:27.737003919Z" level=error msg="ContainerStatus for \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\": not found" Jan 17 12:30:27.737245 kubelet[3028]: E0117 12:30:27.737168 3028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\": not found" containerID="7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc" Jan 17 12:30:27.737336 kubelet[3028]: I0117 12:30:27.737256 3028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc"} err="failed to get container status \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ac8e8984b37f83a7c244b0ac04d2f454a033147fca00e2ff199bbcf94ffb4fc\": not found" Jan 17 12:30:27.737336 kubelet[3028]: I0117 12:30:27.737267 3028 scope.go:117] "RemoveContainer" containerID="7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9" Jan 17 12:30:27.737486 containerd[1621]: time="2025-01-17T12:30:27.737390614Z" level=error msg="ContainerStatus for \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\": not found" Jan 17 12:30:27.737524 kubelet[3028]: E0117 12:30:27.737497 3028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\": not found" containerID="7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9" Jan 17 12:30:27.737524 kubelet[3028]: I0117 12:30:27.737518 3028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9"} err="failed to get container status \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7526dcf74c86abcbd15530da9e31ab794d4f3b2f929419fbcb28c50d9e0224a9\": not found" Jan 17 12:30:27.737589 kubelet[3028]: I0117 12:30:27.737526 3028 scope.go:117] "RemoveContainer" containerID="d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b" Jan 17 12:30:27.737682 containerd[1621]: time="2025-01-17T12:30:27.737637137Z" level=error msg="ContainerStatus for \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\": not found" Jan 17 12:30:27.737812 kubelet[3028]: E0117 12:30:27.737788 3028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\": not found" containerID="d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b" Jan 17 12:30:27.737879 kubelet[3028]: I0117 12:30:27.737857 3028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b"} err="failed to get container status \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d76720cd2cc90116bbe58e903fd9b6d9a46f6d0b70134ebacc9477ce7d99d88b\": not found" Jan 17 12:30:28.423866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b25b920d97e16c1e7717f1aaadf29323aaeec220459aa77ec15dabe6d8ad991-rootfs.mount: Deactivated successfully. Jan 17 12:30:28.424071 systemd[1]: var-lib-kubelet-pods-eaf9f44c\x2d94bd\x2d4510\x2db426\x2ddb623b339a59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d449lv.mount: Deactivated successfully. Jan 17 12:30:28.424257 systemd[1]: var-lib-kubelet-pods-eaf9f44c\x2d94bd\x2d4510\x2db426\x2ddb623b339a59-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 17 12:30:28.695352 containerd[1621]: time="2025-01-17T12:30:28.694488532Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:30:28.715698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281785536.mount: Deactivated successfully. 
Jan 17 12:30:28.720080 containerd[1621]: time="2025-01-17T12:30:28.720025016Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad\"" Jan 17 12:30:28.720707 containerd[1621]: time="2025-01-17T12:30:28.720593363Z" level=info msg="StartContainer for \"878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad\"" Jan 17 12:30:28.782477 containerd[1621]: time="2025-01-17T12:30:28.782430493Z" level=info msg="StartContainer for \"878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad\" returns successfully" Jan 17 12:30:29.223522 kubelet[3028]: I0117 12:30:29.223474 3028 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4f286a43-3415-4b62-af04-ec5ba12ddf00" path="/var/lib/kubelet/pods/4f286a43-3415-4b62-af04-ec5ba12ddf00/volumes" Jan 17 12:30:29.226606 kubelet[3028]: I0117 12:30:29.226565 3028 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" path="/var/lib/kubelet/pods/5a63cf5e-1b3a-4788-8363-8dde4df3c8b8/volumes" Jan 17 12:30:29.227550 kubelet[3028]: I0117 12:30:29.227524 3028 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eaf9f44c-94bd-4510-b426-db623b339a59" path="/var/lib/kubelet/pods/eaf9f44c-94bd-4510-b426-db623b339a59/volumes" Jan 17 12:30:29.636152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad-rootfs.mount: Deactivated successfully. Jan 17 12:30:29.642686 containerd[1621]: time="2025-01-17T12:30:29.642617547Z" level=info msg="shim disconnected" id=878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad namespace=k8s.io Jan 17 12:30:29.642686 containerd[1621]: time="2025-01-17T12:30:29.642671418Z" level=warning msg="cleaning up after shim disconnected" id=878a191983ccc94c1450c1e6a614746135739e3038035fb6d25376f8d4f8f7ad namespace=k8s.io Jan 17 12:30:29.642686 containerd[1621]: time="2025-01-17T12:30:29.642679704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:29.725588 containerd[1621]: time="2025-01-17T12:30:29.725528301Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:30:29.747210 containerd[1621]: time="2025-01-17T12:30:29.745709499Z" level=info msg="CreateContainer within sandbox \"18d57f3d7ae88c80d1643bad9e03f4193e9eefaef9d9dbefa233e06d9cd9bbd0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2\"" Jan 17 12:30:29.748575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895025454.mount: Deactivated successfully. 
Jan 17 12:30:29.751628 containerd[1621]: time="2025-01-17T12:30:29.749021211Z" level=info msg="StartContainer for \"9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2\"" Jan 17 12:30:29.811170 containerd[1621]: time="2025-01-17T12:30:29.811120113Z" level=info msg="StartContainer for \"9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2\" returns successfully" Jan 17 12:30:29.903271 kubelet[3028]: I0117 12:30:29.903159 3028 topology_manager.go:215] "Topology Admit Handler" podUID="bb0a490d-9637-42f4-9d41-3001adc1dea8" podNamespace="calico-system" podName="calico-kube-controllers-6fb49fb4b9-52spt" Jan 17 12:30:29.904223 kubelet[3028]: E0117 12:30:29.904013 3028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" containerName="calico-kube-controllers" Jan 17 12:30:29.905071 kubelet[3028]: E0117 12:30:29.905054 3028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eaf9f44c-94bd-4510-b426-db623b339a59" containerName="calico-typha" Jan 17 12:30:29.906087 kubelet[3028]: I0117 12:30:29.906072 3028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a63cf5e-1b3a-4788-8363-8dde4df3c8b8" containerName="calico-kube-controllers" Jan 17 12:30:29.906234 kubelet[3028]: I0117 12:30:29.906221 3028 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf9f44c-94bd-4510-b426-db623b339a59" containerName="calico-typha" Jan 17 12:30:30.005143 kubelet[3028]: I0117 12:30:30.005092 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfcl\" (UniqueName: \"kubernetes.io/projected/bb0a490d-9637-42f4-9d41-3001adc1dea8-kube-api-access-btfcl\") pod \"calico-kube-controllers-6fb49fb4b9-52spt\" (UID: \"bb0a490d-9637-42f4-9d41-3001adc1dea8\") " pod="calico-system/calico-kube-controllers-6fb49fb4b9-52spt" Jan 17 12:30:30.005143 kubelet[3028]: I0117 12:30:30.005141 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb0a490d-9637-42f4-9d41-3001adc1dea8-tigera-ca-bundle\") pod \"calico-kube-controllers-6fb49fb4b9-52spt\" (UID: \"bb0a490d-9637-42f4-9d41-3001adc1dea8\") " pod="calico-system/calico-kube-controllers-6fb49fb4b9-52spt" Jan 17 12:30:30.219853 containerd[1621]: time="2025-01-17T12:30:30.219467728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb49fb4b9-52spt,Uid:bb0a490d-9637-42f4-9d41-3001adc1dea8,Namespace:calico-system,Attempt:0,}" Jan 17 12:30:30.310349 kubelet[3028]: I0117 12:30:30.310306 3028 topology_manager.go:215] "Topology Admit Handler" podUID="56819374-4d91-402c-9bde-4c25e8a9620e" podNamespace="calico-system" podName="calico-typha-5966648df-g8c58" Jan 17 12:30:30.372533 systemd-networkd[1250]: cali21da9119f56: Link UP Jan 17 12:30:30.372761 systemd-networkd[1250]: cali21da9119f56: Gained carrier Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.287 [INFO][6394] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0 calico-kube-controllers-6fb49fb4b9- calico-system bb0a490d-9637-42f4-9d41-3001adc1dea8 1046 0 2025-01-17 12:30:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fb49fb4b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-6-80d8e78ae3 calico-kube-controllers-6fb49fb4b9-52spt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21da9119f56 [] []}} ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.287 [INFO][6394] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.322 [INFO][6405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" HandleID="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.336 [INFO][6405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" HandleID="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6-80d8e78ae3", "pod":"calico-kube-controllers-6fb49fb4b9-52spt", "timestamp":"2025-01-17 12:30:30.32271464 +0000 UTC"}, Hostname:"ci-4081-3-0-6-80d8e78ae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.336 [INFO][6405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.336 [INFO][6405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.336 [INFO][6405] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6-80d8e78ae3' Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.340 [INFO][6405] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.345 [INFO][6405] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.350 [INFO][6405] ipam/ipam.go 489: Trying affinity for 192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.352 [INFO][6405] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.354 [INFO][6405] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.354 [INFO][6405] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.356 [INFO][6405] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.361 [INFO][6405] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.366 [INFO][6405] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.199/26] block=192.168.73.192/26 handle="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.366 [INFO][6405] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.199/26] handle="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" host="ci-4081-3-0-6-80d8e78ae3" Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.366 [INFO][6405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:30:30.381404 containerd[1621]: 2025-01-17 12:30:30.366 [INFO][6405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.199/26] IPv6=[] ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" HandleID="k8s-pod-network.3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.369 [INFO][6394] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0", GenerateName:"calico-kube-controllers-6fb49fb4b9-", Namespace:"calico-system", SelfLink:"", UID:"bb0a490d-9637-42f4-9d41-3001adc1dea8", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb49fb4b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"", Pod:"calico-kube-controllers-6fb49fb4b9-52spt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21da9119f56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.370 [INFO][6394] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.199/32] ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.370 [INFO][6394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21da9119f56 ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.371 [INFO][6394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 
12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.371 [INFO][6394] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0", GenerateName:"calico-kube-controllers-6fb49fb4b9-", Namespace:"calico-system", SelfLink:"", UID:"bb0a490d-9637-42f4-9d41-3001adc1dea8", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb49fb4b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6-80d8e78ae3", ContainerID:"3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e", Pod:"calico-kube-controllers-6fb49fb4b9-52spt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21da9119f56", MAC:"f6:f1:1f:77:8a:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:30:30.383824 containerd[1621]: 2025-01-17 12:30:30.378 [INFO][6394] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e" Namespace="calico-system" Pod="calico-kube-controllers-6fb49fb4b9-52spt" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--6fb49fb4b9--52spt-eth0" Jan 17 12:30:30.404974 containerd[1621]: time="2025-01-17T12:30:30.404852203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:30.404974 containerd[1621]: time="2025-01-17T12:30:30.404910693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:30.404974 containerd[1621]: time="2025-01-17T12:30:30.404925020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:30.405203 containerd[1621]: time="2025-01-17T12:30:30.405005161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:30.454767 containerd[1621]: time="2025-01-17T12:30:30.454693732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb49fb4b9-52spt,Uid:bb0a490d-9637-42f4-9d41-3001adc1dea8,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e\"" Jan 17 12:30:30.461647 containerd[1621]: time="2025-01-17T12:30:30.461614755Z" level=info msg="CreateContainer within sandbox \"3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:30:30.475462 containerd[1621]: time="2025-01-17T12:30:30.475343278Z" level=info msg="CreateContainer within sandbox \"3d99d6cd1583b447a5755e8327de4b00e00409fdf8be6ae793ee3a4785025a1e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad0027206e1a91910dbc6d5cc0eeda7d59fe0de9bff6156b71fea66f8e601b7a\"" Jan 17 12:30:30.477346 containerd[1621]: time="2025-01-17T12:30:30.475783454Z" level=info msg="StartContainer for \"ad0027206e1a91910dbc6d5cc0eeda7d59fe0de9bff6156b71fea66f8e601b7a\"" Jan 17 12:30:30.503341 kubelet[3028]: I0117 12:30:30.503301 3028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:30:30.510454 kubelet[3028]: I0117 12:30:30.510415 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56819374-4d91-402c-9bde-4c25e8a9620e-tigera-ca-bundle\") pod \"calico-typha-5966648df-g8c58\" (UID: \"56819374-4d91-402c-9bde-4c25e8a9620e\") " pod="calico-system/calico-typha-5966648df-g8c58" Jan 17 12:30:30.510896 kubelet[3028]: I0117 12:30:30.510654 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrd4l\" (UniqueName: \"kubernetes.io/projected/56819374-4d91-402c-9bde-4c25e8a9620e-kube-api-access-jrd4l\") pod \"calico-typha-5966648df-g8c58\" (UID: \"56819374-4d91-402c-9bde-4c25e8a9620e\") " pod="calico-system/calico-typha-5966648df-g8c58" Jan 17 12:30:30.510896 kubelet[3028]: I0117 12:30:30.510688 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56819374-4d91-402c-9bde-4c25e8a9620e-typha-certs\") pod \"calico-typha-5966648df-g8c58\" (UID: \"56819374-4d91-402c-9bde-4c25e8a9620e\") " pod="calico-system/calico-typha-5966648df-g8c58" Jan 17 12:30:30.575618 containerd[1621]: time="2025-01-17T12:30:30.575573462Z" level=info msg="StartContainer for \"ad0027206e1a91910dbc6d5cc0eeda7d59fe0de9bff6156b71fea66f8e601b7a\" returns successfully" Jan 17 12:30:30.625315 containerd[1621]: time="2025-01-17T12:30:30.625265871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966648df-g8c58,Uid:56819374-4d91-402c-9bde-4c25e8a9620e,Namespace:calico-system,Attempt:0,}" Jan 17 12:30:30.673405 containerd[1621]: time="2025-01-17T12:30:30.672575830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:30.673405 containerd[1621]: time="2025-01-17T12:30:30.673311980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:30.673405 containerd[1621]: time="2025-01-17T12:30:30.673325346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:30.673668 containerd[1621]: time="2025-01-17T12:30:30.673445531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:30.717536 systemd[1]: run-containerd-runc-k8s.io-56166601f60cce777d65b74912d2a7747e6a62d9d9e205e40dcefd66dd55c18f-runc.LQptcj.mount: Deactivated successfully. Jan 17 12:30:30.770296 kubelet[3028]: I0117 12:30:30.769834 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fb49fb4b9-52spt" podStartSLOduration=3.769795575 podStartE2EDuration="3.769795575s" podCreationTimestamp="2025-01-17 12:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:30.738097231 +0000 UTC m=+87.650128683" watchObservedRunningTime="2025-01-17 12:30:30.769795575 +0000 UTC m=+87.681827026" Jan 17 12:30:30.843014 containerd[1621]: time="2025-01-17T12:30:30.842966999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966648df-g8c58,Uid:56819374-4d91-402c-9bde-4c25e8a9620e,Namespace:calico-system,Attempt:0,} returns sandbox id \"56166601f60cce777d65b74912d2a7747e6a62d9d9e205e40dcefd66dd55c18f\"" Jan 17 12:30:30.860098 containerd[1621]: time="2025-01-17T12:30:30.860025369Z" level=info msg="CreateContainer within sandbox \"56166601f60cce777d65b74912d2a7747e6a62d9d9e205e40dcefd66dd55c18f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:30:30.869703 containerd[1621]: time="2025-01-17T12:30:30.869675351Z" level=info msg="CreateContainer within sandbox \"56166601f60cce777d65b74912d2a7747e6a62d9d9e205e40dcefd66dd55c18f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a26004bdc7b709dc1d24c23092c0a935c87dcd2dc1137e2b63ee8ee3bc80f34e\"" Jan 17 12:30:30.870503 containerd[1621]: time="2025-01-17T12:30:30.870453311Z" level=info msg="StartContainer for \"a26004bdc7b709dc1d24c23092c0a935c87dcd2dc1137e2b63ee8ee3bc80f34e\"" Jan 17 12:30:30.939365 containerd[1621]: time="2025-01-17T12:30:30.939272129Z" level=info msg="StartContainer for \"a26004bdc7b709dc1d24c23092c0a935c87dcd2dc1137e2b63ee8ee3bc80f34e\" returns successfully" Jan 17 12:30:31.766756 kubelet[3028]: I0117 12:30:31.766302 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v75t2" podStartSLOduration=5.765998318 podStartE2EDuration="5.765998318s" podCreationTimestamp="2025-01-17 12:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:30.771085195 +0000 UTC m=+87.683116666" watchObservedRunningTime="2025-01-17 12:30:31.765998318 +0000 UTC m=+88.678029779" Jan 17 12:30:31.781388 kubelet[3028]: I0117 12:30:31.780679 3028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5966648df-g8c58" podStartSLOduration=5.780649202 podStartE2EDuration="5.780649202s" podCreationTimestamp="2025-01-17 12:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:31.767019853 
+0000 UTC m=+88.679051325" watchObservedRunningTime="2025-01-17 12:30:31.780649202 +0000 UTC m=+88.692680653" Jan 17 12:30:31.822655 systemd[1]: run-containerd-runc-k8s.io-9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2-runc.NrwHG9.mount: Deactivated successfully. Jan 17 12:30:31.950448 systemd-networkd[1250]: cali21da9119f56: Gained IPv6LL Jan 17 12:30:32.272265 systemd-journald[1169]: Under memory pressure, flushing caches. Jan 17 12:30:32.270871 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:30:32.270915 systemd-resolved[1506]: Flushed all caches. Jan 17 12:30:34.318489 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:30:34.321337 systemd-journald[1169]: Under memory pressure, flushing caches. Jan 17 12:30:34.318499 systemd-resolved[1506]: Flushed all caches. Jan 17 12:30:57.342780 systemd[1]: run-containerd-runc-k8s.io-9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2-runc.m6t7gm.mount: Deactivated successfully. Jan 17 12:31:00.243649 systemd[1]: run-containerd-runc-k8s.io-ad0027206e1a91910dbc6d5cc0eeda7d59fe0de9bff6156b71fea66f8e601b7a-runc.YpcKjR.mount: Deactivated successfully. Jan 17 12:31:04.250368 containerd[1621]: time="2025-01-17T12:31:04.245122704Z" level=info msg="StopPodSandbox for \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\"" Jan 17 12:31:04.251027 containerd[1621]: time="2025-01-17T12:31:04.250383765Z" level=info msg="TearDown network for sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" successfully" Jan 17 12:31:04.251027 containerd[1621]: time="2025-01-17T12:31:04.250404474Z" level=info msg="StopPodSandbox for \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" returns successfully" Jan 17 12:31:04.254604 containerd[1621]: time="2025-01-17T12:31:04.254577511Z" level=info msg="RemovePodSandbox for \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\"" Jan 17 12:31:04.254664 containerd[1621]: time="2025-01-17T12:31:04.254607016Z" level=info msg="Forcibly stopping sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\"" Jan 17 12:31:04.254692 containerd[1621]: time="2025-01-17T12:31:04.254666408Z" level=info msg="TearDown network for sandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" successfully" Jan 17 12:31:04.259148 containerd[1621]: time="2025-01-17T12:31:04.259108632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:31:04.259148 containerd[1621]: time="2025-01-17T12:31:04.259162233Z" level=info msg="RemovePodSandbox \"e595c90bcf4096461b6fb41e9e07d77dd18f1e0580deffb7d5844e3eeebac6dc\" returns successfully" Jan 17 12:31:04.259469 containerd[1621]: time="2025-01-17T12:31:04.259439633Z" level=info msg="StopPodSandbox for \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\"" Jan 17 12:31:04.259513 containerd[1621]: time="2025-01-17T12:31:04.259498964Z" level=info msg="TearDown network for sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" successfully" Jan 17 12:31:04.259513 containerd[1621]: time="2025-01-17T12:31:04.259508452Z" level=info msg="StopPodSandbox for \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" returns successfully" Jan 17 12:31:04.260084 containerd[1621]: time="2025-01-17T12:31:04.259748823Z" level=info msg="RemovePodSandbox for \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\"" Jan 17 12:31:04.260084 containerd[1621]: time="2025-01-17T12:31:04.259774040Z" level=info msg="Forcibly stopping sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\"" Jan 17 12:31:04.260084 containerd[1621]: time="2025-01-17T12:31:04.259825717Z" level=info msg="TearDown network for sandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" successfully" Jan 17 12:31:04.263151 containerd[1621]: time="2025-01-17T12:31:04.263045096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:31:04.263151 containerd[1621]: time="2025-01-17T12:31:04.263085502Z" level=info msg="RemovePodSandbox \"cd8ce68c5e5655ba150c1f41988af2a72608b13d15936bed5bbb90ac6ab452cf\" returns successfully" Jan 17 12:31:04.263545 containerd[1621]: time="2025-01-17T12:31:04.263471467Z" level=info msg="StopPodSandbox for \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\"" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.446 [WARNING][6967] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.447 [INFO][6967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.447 [INFO][6967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" iface="eth0" netns="" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.447 [INFO][6967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.447 [INFO][6967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.546 [INFO][6973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.547 [INFO][6973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.547 [INFO][6973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.553 [WARNING][6973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.553 [INFO][6973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.555 [INFO][6973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:31:04.561026 containerd[1621]: 2025-01-17 12:31:04.558 [INFO][6967] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.561026 containerd[1621]: time="2025-01-17T12:31:04.560951789Z" level=info msg="TearDown network for sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" successfully" Jan 17 12:31:04.561026 containerd[1621]: time="2025-01-17T12:31:04.560973630Z" level=info msg="StopPodSandbox for \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" returns successfully" Jan 17 12:31:04.564761 containerd[1621]: time="2025-01-17T12:31:04.561852027Z" level=info msg="RemovePodSandbox for \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\"" Jan 17 12:31:04.564761 containerd[1621]: time="2025-01-17T12:31:04.561882795Z" level=info msg="Forcibly stopping sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\"" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.597 [WARNING][6991] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" WorkloadEndpoint="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.598 [INFO][6991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.598 [INFO][6991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" iface="eth0" netns="" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.598 [INFO][6991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.598 [INFO][6991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.625 [INFO][6997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.625 [INFO][6997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.625 [INFO][6997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.630 [WARNING][6997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.630 [INFO][6997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" HandleID="k8s-pod-network.32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Workload="ci--4081--3--0--6--80d8e78ae3-k8s-calico--kube--controllers--5bbfc69fcf--t5lf6-eth0" Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.631 [INFO][6997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:31:04.636150 containerd[1621]: 2025-01-17 12:31:04.633 [INFO][6991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4" Jan 17 12:31:04.637279 containerd[1621]: time="2025-01-17T12:31:04.636247471Z" level=info msg="TearDown network for sandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" successfully" Jan 17 12:31:04.639919 containerd[1621]: time="2025-01-17T12:31:04.639887079Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:31:04.639998 containerd[1621]: time="2025-01-17T12:31:04.639947662Z" level=info msg="RemovePodSandbox \"32b98fde04923081671614a56da6f97ebb3f411b1e28905d4ed725017ac968b4\" returns successfully" Jan 17 12:32:40.535885 update_engine[1602]: I20250117 12:32:40.535745 1602 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:32:40.535885 update_engine[1602]: I20250117 12:32:40.535818 1602 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.537801 1602 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543526 1602 omaha_request_params.cc:62] Current group set to lts Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543707 1602 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543720 1602 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543740 1602 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543782 1602 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543849 1602 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543861 1602 omaha_request_action.cc:272] Request: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: Jan 17 12:32:40.544070 update_engine[1602]: I20250117 12:32:40.543870 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:32:40.559808 locksmithd[1640]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:32:40.560849 update_engine[1602]: I20250117 12:32:40.560230 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:32:40.560849 update_engine[1602]: I20250117 12:32:40.560524 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:32:40.563072 update_engine[1602]: E20250117 12:32:40.563030 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:32:40.563134 update_engine[1602]: I20250117 12:32:40.563100 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:32:50.417565 update_engine[1602]: I20250117 12:32:50.417472 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:32:50.418073 update_engine[1602]: I20250117 12:32:50.417716 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:32:50.418073 update_engine[1602]: I20250117 12:32:50.417986 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:32:50.418740 update_engine[1602]: E20250117 12:32:50.418617 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:32:50.418740 update_engine[1602]: I20250117 12:32:50.418659 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 12:33:00.413201 update_engine[1602]: I20250117 12:33:00.413090 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:33:00.413943 update_engine[1602]: I20250117 12:33:00.413664 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:33:00.414149 update_engine[1602]: I20250117 12:33:00.414052 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:33:00.415007 update_engine[1602]: E20250117 12:33:00.414944 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:33:00.415081 update_engine[1602]: I20250117 12:33:00.415040 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 12:33:10.417525 update_engine[1602]: I20250117 12:33:10.417426 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:33:10.417982 update_engine[1602]: I20250117 12:33:10.417790 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:33:10.418131 update_engine[1602]: I20250117 12:33:10.418087 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:33:10.418893 update_engine[1602]: E20250117 12:33:10.418853 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.418920 1602 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.418938 1602 omaha_request_action.cc:617] Omaha request response: Jan 17 12:33:10.419759 update_engine[1602]: E20250117 12:33:10.419045 1602 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419080 1602 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419093 1602 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419104 1602 update_attempter.cc:306] Processing Done. Jan 17 12:33:10.419759 update_engine[1602]: E20250117 12:33:10.419126 1602 update_attempter.cc:619] Update failed. Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419140 1602 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419152 1602 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419211 1602 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419319 1602 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419355 1602 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:33:10.419759 update_engine[1602]: I20250117 12:33:10.419368 1602 omaha_request_action.cc:272] Request: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.419759 update_engine[1602]: Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.419381 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.419596 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.419813 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:33:10.421152 update_engine[1602]: E20250117 12:33:10.420673 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420730 1602 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420748 1602 omaha_request_action.cc:617] Omaha request response: Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420761 1602 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420771 1602 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420778 1602 update_attempter.cc:306] Processing Done. Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420786 1602 update_attempter.cc:310] Error event sent. Jan 17 12:33:10.421152 update_engine[1602]: I20250117 12:33:10.420797 1602 update_check_scheduler.cc:74] Next update check in 43m16s Jan 17 12:33:10.422145 locksmithd[1640]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 12:33:10.422145 locksmithd[1640]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 12:33:55.492833 systemd[1]: Started sshd@7-49.12.221.202:22-139.178.89.65:55542.service - OpenSSH per-connection server daemon (139.178.89.65:55542). Jan 17 12:33:56.527711 sshd[7351]: Accepted publickey for core from 139.178.89.65 port 55542 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:33:56.530658 sshd[7351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:33:56.538721 systemd-logind[1596]: New session 8 of user core. Jan 17 12:33:56.543995 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:33:57.762386 sshd[7351]: pam_unix(sshd:session): session closed for user core Jan 17 12:33:57.769439 systemd[1]: sshd@7-49.12.221.202:22-139.178.89.65:55542.service: Deactivated successfully. Jan 17 12:33:57.781269 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:33:57.783163 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:33:57.785751 systemd-logind[1596]: Removed session 8. Jan 17 12:34:02.924431 systemd[1]: Started sshd@8-49.12.221.202:22-139.178.89.65:46024.service - OpenSSH per-connection server daemon (139.178.89.65:46024). Jan 17 12:34:03.919839 sshd[7421]: Accepted publickey for core from 139.178.89.65 port 46024 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:03.922302 sshd[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:03.927601 systemd-logind[1596]: New session 9 of user core. Jan 17 12:34:03.932529 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:34:04.682095 sshd[7421]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:04.685981 systemd[1]: sshd@8-49.12.221.202:22-139.178.89.65:46024.service: Deactivated successfully. Jan 17 12:34:04.690748 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:34:04.691220 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:34:04.693913 systemd-logind[1596]: Removed session 9. 
Jan 17 12:34:09.853067 systemd[1]: Started sshd@9-49.12.221.202:22-139.178.89.65:46026.service - OpenSSH per-connection server daemon (139.178.89.65:46026). Jan 17 12:34:10.846427 sshd[7439]: Accepted publickey for core from 139.178.89.65 port 46026 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:10.848632 sshd[7439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:10.853099 systemd-logind[1596]: New session 10 of user core. Jan 17 12:34:10.857683 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:34:11.602531 sshd[7439]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:11.606263 systemd[1]: sshd@9-49.12.221.202:22-139.178.89.65:46026.service: Deactivated successfully. Jan 17 12:34:11.610963 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:34:11.611508 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:34:11.613982 systemd-logind[1596]: Removed session 10. Jan 17 12:34:11.767634 systemd[1]: Started sshd@10-49.12.221.202:22-139.178.89.65:51588.service - OpenSSH per-connection server daemon (139.178.89.65:51588). Jan 17 12:34:12.743313 sshd[7454]: Accepted publickey for core from 139.178.89.65 port 51588 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:12.745145 sshd[7454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:12.750399 systemd-logind[1596]: New session 11 of user core. Jan 17 12:34:12.756503 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:34:13.543957 sshd[7454]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:13.550628 systemd[1]: sshd@10-49.12.221.202:22-139.178.89.65:51588.service: Deactivated successfully. Jan 17 12:34:13.554370 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:34:13.555509 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:34:13.556755 systemd-logind[1596]: Removed session 11. Jan 17 12:34:13.705777 systemd[1]: Started sshd@11-49.12.221.202:22-139.178.89.65:51604.service - OpenSSH per-connection server daemon (139.178.89.65:51604). Jan 17 12:34:14.678334 sshd[7466]: Accepted publickey for core from 139.178.89.65 port 51604 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:14.680396 sshd[7466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:14.685499 systemd-logind[1596]: New session 12 of user core. Jan 17 12:34:14.690524 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:34:15.411125 sshd[7466]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:15.415308 systemd[1]: sshd@11-49.12.221.202:22-139.178.89.65:51604.service: Deactivated successfully. Jan 17 12:34:15.423816 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:34:15.425207 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:34:15.426512 systemd-logind[1596]: Removed session 12. Jan 17 12:34:20.575082 systemd[1]: Started sshd@12-49.12.221.202:22-139.178.89.65:51612.service - OpenSSH per-connection server daemon (139.178.89.65:51612). 
Jan 17 12:34:21.539912 sshd[7486]: Accepted publickey for core from 139.178.89.65 port 51612 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:21.542001 sshd[7486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:21.548055 systemd-logind[1596]: New session 13 of user core. Jan 17 12:34:21.552560 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:34:22.301213 sshd[7486]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:22.305744 systemd[1]: sshd@12-49.12.221.202:22-139.178.89.65:51612.service: Deactivated successfully. Jan 17 12:34:22.310822 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:34:22.311859 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:34:22.313069 systemd-logind[1596]: Removed session 13. Jan 17 12:34:27.462751 systemd[1]: Started sshd@13-49.12.221.202:22-139.178.89.65:38566.service - OpenSSH per-connection server daemon (139.178.89.65:38566). Jan 17 12:34:28.422959 sshd[7522]: Accepted publickey for core from 139.178.89.65 port 38566 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:28.425448 sshd[7522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:28.431905 systemd-logind[1596]: New session 14 of user core. Jan 17 12:34:28.436461 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:34:29.166124 sshd[7522]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:29.169870 systemd[1]: sshd@13-49.12.221.202:22-139.178.89.65:38566.service: Deactivated successfully. Jan 17 12:34:29.176061 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:34:29.177458 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:34:29.178876 systemd-logind[1596]: Removed session 14. Jan 17 12:34:34.330859 systemd[1]: Started sshd@14-49.12.221.202:22-139.178.89.65:49564.service - OpenSSH per-connection server daemon (139.178.89.65:49564). Jan 17 12:34:35.319856 sshd[7574]: Accepted publickey for core from 139.178.89.65 port 49564 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:35.323787 sshd[7574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:35.334359 systemd-logind[1596]: New session 15 of user core. Jan 17 12:34:35.341753 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:34:36.051172 sshd[7574]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:36.054659 systemd[1]: sshd@14-49.12.221.202:22-139.178.89.65:49564.service: Deactivated successfully. Jan 17 12:34:36.059592 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:34:36.060927 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:34:36.062843 systemd-logind[1596]: Removed session 15. Jan 17 12:34:36.216714 systemd[1]: Started sshd@15-49.12.221.202:22-139.178.89.65:49574.service - OpenSSH per-connection server daemon (139.178.89.65:49574). Jan 17 12:34:37.189646 sshd[7587]: Accepted publickey for core from 139.178.89.65 port 49574 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:37.191793 sshd[7587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:37.196894 systemd-logind[1596]: New session 16 of user core. Jan 17 12:34:37.202581 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 12:34:38.165431 sshd[7587]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:38.171144 systemd[1]: sshd@15-49.12.221.202:22-139.178.89.65:49574.service: Deactivated successfully. Jan 17 12:34:38.176991 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:34:38.177977 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:34:38.179512 systemd-logind[1596]: Removed session 16. Jan 17 12:34:38.326691 systemd[1]: Started sshd@16-49.12.221.202:22-139.178.89.65:49590.service - OpenSSH per-connection server daemon (139.178.89.65:49590). Jan 17 12:34:39.302238 sshd[7599]: Accepted publickey for core from 139.178.89.65 port 49590 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:39.305055 sshd[7599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:39.312121 systemd-logind[1596]: New session 17 of user core. Jan 17 12:34:39.317664 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:34:41.914481 sshd[7599]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:41.923382 systemd[1]: sshd@16-49.12.221.202:22-139.178.89.65:49590.service: Deactivated successfully. Jan 17 12:34:41.927325 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:34:41.927962 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:34:41.931053 systemd-logind[1596]: Removed session 17. Jan 17 12:34:42.073776 systemd[1]: Started sshd@17-49.12.221.202:22-139.178.89.65:36504.service - OpenSSH per-connection server daemon (139.178.89.65:36504). Jan 17 12:34:42.382815 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:34:42.388060 systemd-journald[1169]: Under memory pressure, flushing caches. Jan 17 12:34:42.382824 systemd-resolved[1506]: Flushed all caches. Jan 17 12:34:43.048611 sshd[7618]: Accepted publickey for core from 139.178.89.65 port 36504 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:43.051060 sshd[7618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:43.056523 systemd-logind[1596]: New session 18 of user core. Jan 17 12:34:43.063833 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:34:44.108053 sshd[7618]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:44.113590 systemd[1]: sshd@17-49.12.221.202:22-139.178.89.65:36504.service: Deactivated successfully. Jan 17 12:34:44.118890 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:34:44.119338 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:34:44.121329 systemd-logind[1596]: Removed session 18. Jan 17 12:34:44.275502 systemd[1]: Started sshd@18-49.12.221.202:22-139.178.89.65:36508.service - OpenSSH per-connection server daemon (139.178.89.65:36508). Jan 17 12:34:44.433254 systemd-journald[1169]: Under memory pressure, flushing caches. Jan 17 12:34:44.430708 systemd-resolved[1506]: Under memory pressure, flushing caches. Jan 17 12:34:44.430717 systemd-resolved[1506]: Flushed all caches. Jan 17 12:34:45.282048 sshd[7630]: Accepted publickey for core from 139.178.89.65 port 36508 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:45.284166 sshd[7630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:45.289211 systemd-logind[1596]: New session 19 of user core. 
Jan 17 12:34:45.296619 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:34:46.037937 sshd[7630]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:46.042053 systemd[1]: sshd@18-49.12.221.202:22-139.178.89.65:36508.service: Deactivated successfully. Jan 17 12:34:46.047931 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:34:46.048407 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:34:46.050366 systemd-logind[1596]: Removed session 19. Jan 17 12:34:51.197974 systemd[1]: Started sshd@19-49.12.221.202:22-139.178.89.65:36512.service - OpenSSH per-connection server daemon (139.178.89.65:36512). Jan 17 12:34:52.172371 sshd[7649]: Accepted publickey for core from 139.178.89.65 port 36512 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:52.174137 sshd[7649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:52.179240 systemd-logind[1596]: New session 20 of user core. Jan 17 12:34:52.183489 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:34:52.906336 sshd[7649]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:52.910616 systemd[1]: sshd@19-49.12.221.202:22-139.178.89.65:36512.service: Deactivated successfully. Jan 17 12:34:52.915718 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:34:52.916284 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:34:52.918573 systemd-logind[1596]: Removed session 20. Jan 17 12:34:57.300907 systemd[1]: run-containerd-runc-k8s.io-9fb5ae0d845496899bb6129f5ee5d75a721f36093ba7b717162a9a582c63b7e2-runc.oyqRIx.mount: Deactivated successfully. Jan 17 12:34:58.073502 systemd[1]: Started sshd@20-49.12.221.202:22-139.178.89.65:48654.service - OpenSSH per-connection server daemon (139.178.89.65:48654). Jan 17 12:34:59.044293 sshd[7689]: Accepted publickey for core from 139.178.89.65 port 48654 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:34:59.046454 sshd[7689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:34:59.051561 systemd-logind[1596]: New session 21 of user core. Jan 17 12:34:59.057480 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:34:59.790306 sshd[7689]: pam_unix(sshd:session): session closed for user core Jan 17 12:34:59.795715 systemd[1]: sshd@20-49.12.221.202:22-139.178.89.65:48654.service: Deactivated successfully. Jan 17 12:34:59.795752 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:34:59.800554 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:34:59.801834 systemd-logind[1596]: Removed session 21. Jan 17 12:35:00.243242 systemd[1]: run-containerd-runc-k8s.io-ad0027206e1a91910dbc6d5cc0eeda7d59fe0de9bff6156b71fea66f8e601b7a-runc.0GX68W.mount: Deactivated successfully. Jan 17 12:35:04.960443 systemd[1]: Started sshd@21-49.12.221.202:22-139.178.89.65:50210.service - OpenSSH per-connection server daemon (139.178.89.65:50210). Jan 17 12:35:05.945307 sshd[7724]: Accepted publickey for core from 139.178.89.65 port 50210 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:35:05.947432 sshd[7724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:35:05.952296 systemd-logind[1596]: New session 22 of user core. Jan 17 12:35:05.957602 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 12:35:06.697504 sshd[7724]: pam_unix(sshd:session): session closed for user core Jan 17 12:35:06.701630 systemd[1]: sshd@21-49.12.221.202:22-139.178.89.65:50210.service: Deactivated successfully. Jan 17 12:35:06.707504 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:35:06.708267 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:35:06.709396 systemd-logind[1596]: Removed session 22.